Essay
No. 039
4 March 2026
12 min read
Measurement
How SaaS marketing built a perfect system for optimizing the wrong thing.
“The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it was intended to monitor.” — Donald T. Campbell, 1979
Every operational system has a loss function: a definition of success it optimizes for, rewards, and, over time, enforces. Not always written down. Not always consciously chosen. But always present, shaping every budget decision, every performance review, every quarterly narrative. In SaaS marketing, that loss function changed. Not through a strategic offsite. Not through a board decision. Through instrumentation, incentive alignment, and the slow accumulation of reporting dependencies. The function shifted from building durable demand to maximizing pipeline contribution. Everything that followed was rational inside that objective. And everything that mattered most became progressively harder to justify inside it.
“If you can’t measure it, you can’t manage it.” Usually credited to Drucker, possibly Deming, disputed by both. The uncertainty is fitting: the phrase outgrew its author decades ago. It became a management principle, then an operating assumption, then a philosophical mandate that licensed an entire infrastructure. In SaaS, that mandate arrived at precisely the moment capital markets were demanding predictability, boards were demanding quarterly narratives, and a rapidly growing MarTech industry was offering to deliver both. The correction was real and necessary. Pre-2010 marketing had operated with too much faith and too little evidence. The demand for accountability was legitimate.
But a principle that begins as a call for rigor can harden into something more dangerous: the silent assumption that what resists measurement does not need to be managed. From there, the logic proceeds in four steps that nobody announces and everyone follows: Measure whatever can be easily measured. Disregard what cannot. Presume that what cannot be measured is unimportant. Presume that what cannot be measured does not exist.
It’s not policy. It’s gravity.
There is a comforting fiction that metrics are descriptive, that they simply reflect what is happening. In practice, they are selective. The map is not the territory. But an organization that has been navigating by the map long enough eventually forgets there is a difference.
A lens shapes how you see something. A filter determines what is allowed into the field of view at all. Measurement systems do not merely clarify reality. They determine which parts of it are allowed to count. Over time, the filter becomes the organization’s definition of reality. When an organization installs a measurement system, it acquires a filter that determines which activities are legible, which are defensible, and which can claim resources. The forces that determine whether a company wins — brand preference, category authority, buyer trust — operate upstream of the pipeline and do not register inside a pipeline dashboard. What the dashboard cannot see, the budget conversation cannot reach.
Over time, the measurement system does not merely report on the organization. It reorganizes it. Capabilities migrate toward what the system can see. Capabilities it cannot see atrophy. Eventually, the organization becomes optimized for the measurement system itself, not for the market it is supposed to serve.
Marketing did not lose power because it became less valuable. It lost power because the definition of value narrowed to what the measurement infrastructure could recognize.

Robert McNamara was U.S. Secretary of Defense during the Vietnam War, one of the original “Whiz Kids,” the team of statistical analysts who rebuilt Ford Motor Company after World War II, now applying quantitative discipline to military strategy. He chose enemy body count as the primary metric of war progress, because it was the thing that could be reliably counted. The numbers were real, the methodology consistent, the reporting rigorous. The numbers kept improving while the war was being lost.
The problem was not measurement error. The problem was that the system had embedded the wrong loss function. Body count measured tactical output, not strategic outcome. It captured what could be easily counted, not what actually determined victory: legitimacy, morale, political stability, population allegiance. The system optimized precisely for what it measured. And because the numbers were precise, they carried authority. Precision substituted for relevance.
The most dangerous measurement error is not inaccuracy. It is a precise answer to the wrong question.
Goodhart’s Law identifies the corruption of individual metrics when they become targets. Donald Campbell, whose formulation opens this article, identified a deeper mechanism. Campbell’s Law does not describe a metric drifting. It describes an organization restructuring itself around the metric: incentives shifting, behaviors adapting, capabilities atrophying, until the instrument and the institution have fused. The metric did not distort the function. It reorganized it.
SaaS marketing accepted McNamara’s logic without McNamara’s excuse. He was navigating genuinely novel asymmetric warfare with incomplete doctrine. SaaS marketing adopted the body count model voluntarily, in a commercial context where better questions were available. The reason it did not ask them is institutional, and that is the real argument.
At the core of what happened is not a measurement failure. It is the resolution of a conflict between two competing definitions of what marketing is for.
Loss Function A: Build durable demand and market preference. Invest in brand, category authority, and buyer trust before the sales cycle begins. Accept that returns are delayed, diffuse, and difficult to attribute. Optimize for market position over time.
Loss Function B: Maximize marketing-sourced pipeline contribution. Generate measurable leads, track conversion through the funnel, report contribution to quarterly revenue. Optimize for legibility inside the financial reporting cycle.
In practical terms, Loss Function A corresponds to demand creation. Loss Function B corresponds to demand capture. Both are internally rational. Both solve real problems. Both have institutional sponsors. Loss Function A is the marketer’s argument: it describes how markets actually work, how preference compounds, how brand reduces acquisition cost over time. Loss Function B is the CFO’s argument: it describes how budgets get justified, how functions earn credibility, how marketing proves its seat at the revenue table. In most institutional settings, the loss function aligned with financial reporting prevails.
The shift from A to B was not incompetence. It was a response to institutional pressure arriving from four directions simultaneously. Finance demanded legibility: an investment that could not be connected to a measurable output within the reporting cycle was not a credible investment, it was an act of faith. Investors demanded predictability: VC pressing for the growth rate that justified the last valuation, PE pressing for the EBITDA margin that justified the acquisition multiple. Boards demanded quarterly narratives: Loss Function A produces a story that takes eighteen months to validate; Loss Function B produces one every ninety days. MarTech promised visibility: a rapidly scaling industry offered to make Loss Function B operationally real, to track every lead, attribute every conversion, and report marketing’s contribution to pipeline with apparent precision.
Pipeline won because it aligned with capital markets, forecasting models, and compensation plans. Demand creation did not. This was not a failure of strategic imagination. It was an entirely rational institutional response to the incentives on the table.
What gets optimized disappears upstream first. Brand investment weakened, not because executives stopped believing in brand, but because brand could not be expressed inside Loss Function B. Category creation declined, not because it lost strategic value, but because its effects materialized outside the measurement window. Thought leadership diminished, not because it stopped influencing buyers, but because its influence was diffuse and temporally delayed. Performance marketing expanded, not because it was always superior, but because its outputs were immediately legible. The system was not irrational. It was optimizing exactly what it had been configured to optimize.
Some companies resisted the spiral. Snowflake defined the “Data Cloud” before scaling demand capture. HubSpot institutionalized “Inbound Marketing” before optimizing pipeline conversion. They invested in category definition and thought leadership before those efforts could be attributed to marketing-sourced revenue. As a result, preference existed before the sales conversation began. That preference did not first appear in dashboards. It appeared in pricing power, win rates, and negotiation leverage. The effects were visible in the market before they were legible in the reporting system.
Sustained brand-led strategy is possible. But under institutional pressure, pipeline-first optimization became the default.
Institutional choices become durable when infrastructure encodes them. This is what MarTech did to Loss Function B.
Jon Miller, co-founder of Marketo, has put a number on it. He calls it the Marketing Automation Tax. A company generating $150 million in annual revenue typically invests around 10% in marketing, roughly $15 million. Of that, around 20%, or $3 million, goes to MarTech tools. The marketing automation platform alone costs $100,000 to $200,000 per year. And yet, by Miller’s estimates, two-thirds of that spend delivers no realized value. The measurement infrastructure built to justify marketing’s strategic contribution now consumes a significant portion of the marketing budget itself.
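Miller's arithmetic is simple enough to reproduce. The sketch below uses the figures quoted above; the two-thirds unrealized-value share is his estimate, not measured data.

```python
# Back-of-envelope sketch of Jon Miller's "Marketing Automation Tax",
# using the figures quoted in the essay. All inputs are estimates.

annual_revenue = 150_000_000   # $150M annual revenue
marketing_share = 0.10         # ~10% of revenue goes to marketing
martech_share = 0.20           # ~20% of the marketing budget goes to tools
unrealized_share = 2 / 3       # Miller's estimate: spend with no realized value

marketing_budget = annual_revenue * marketing_share   # $15M
martech_spend = marketing_budget * martech_share      # $3M
unrealized_value = martech_spend * unrealized_share   # ~$2M per year

print(f"Marketing budget: ${marketing_budget:,.0f}")
print(f"MarTech spend:    ${martech_spend:,.0f}")
print(f"Unrealized value: ${unrealized_value:,.0f}")
```

Roughly $2 million a year, at this company size, buying measurement rather than demand.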
That imbalance is not an accident. It reflects a vendor ecosystem with a structural incentive to expand complexity, not reduce it. Every new integration, attribution model, and connector increases dependence. Complexity becomes defensible because it is measurable.
There is another way to think about measurement.
The National Gallery does not rely on sensors or dashboards to understand which painting matters most to visitors. In multiple interviews, its director has pointed to something simpler. The wooden floor in front of The Execution of Lady Jane Grey by Paul Delaroche is more worn than anywhere else in the museum. People stand there longer. The floor knows. This is not a metric in the technical sense. It is an observation, and it is sufficient. That worn patch becomes the starting point for decisions: about room placement, lighting, exhibition emphasis, crowd management. One signal. Multiple real consequences.
Not every meaningful signal requires instrumentation. Sometimes the evidence is already present, visible to anyone willing to look without opening a dashboard.
The industrialization of Loss Function B followed a five-step sequence. Pipeline contribution became the dominant metric. MarTech vendors built systems to measure and maximize it. Budgets rotated toward measurable channels. Data architecture embedded those channels into forecasting models. Switching away became economically and operationally costly, not because anyone designed a trap, but because each individual decision was rational given the previous one.
A CRM embedded as the system of record for pipeline, contacts, and opportunity management redefines what counts as a lead, what counts as marketing contribution, what counts as a deal won or lost. Switching it is not a technology decision. It is an organizational restructuring. Marketing walked into this dependency willingly, because the system offered something genuinely valuable: credibility in the language the CFO understood. The dependency that followed was the price.
This is technical debt applied to objective functions. In software, technical debt accumulates when short-term decisions make future changes expensive. The same dynamic operates here. Once dashboards, forecasting models, compensation plans, and CRM architecture are all wired around pipeline contribution, changing the loss function is not a cultural shift. It is a systems migration. The infrastructure does not merely reflect the chosen objective. It protects it from revision.
A parallel is forming in real time. AI infrastructure platforms are replicating this dynamic at greater speed. The vendors building deepest into workflow layers — the CRM, the data layer, the collaboration infrastructure — are positioning to define not just what gets measured, but what gets done. Whichever platform defines the workflow will define the measurement. Whichever defines the measurement will define perceived value. The mechanism is identical. The speed of lock-in is faster. This will be examined in a later article.
Measurement bias does not produce immediate collapse. It produces delayed fragility. The damage accumulates below the threshold of the dashboard, and by the time it becomes visible, it is already structural.
The causal chain is financial, not philosophical. Underinvestment in brand weakens preference. Weaker preference raises acquisition cost. Higher acquisition cost increases dependence on performance marketing. Diminishing returns in competitive channels inflate CAC further. But the dashboard does not show this chain. It shows each link independently, at different moments, with no annotation connecting cause to consequence. MQL volume rises in Q2. Win rates soften in Q3. CAC increases in Q4. Leadership diagnoses three separate problems. The underlying cause remains invisible, because the measurement system cannot see itself. Under those conditions, every number in the marketing dashboard can improve while the company becomes structurally less likely to win.
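The chain can be made concrete with a toy model. Every parameter below is invented for illustration — this is a sketch of the dynamic, not a calibrated forecast: a brand cut erodes preference with a lag, eroded preference inflates CAC, and each quarter's dashboard still shows leads arriving.

```python
# Illustrative toy model of the causal chain described above.
# All coefficients are made up; only the shape of the dynamic matters.

def simulate(quarters=8, brand_cut=0.5):
    preference = 1.0   # brand-preference index (1.0 = baseline)
    cac = 100.0        # cost to acquire a customer, in dollars
    leads_per_quarter = []
    for _ in range(quarters):
        # Cutting brand spend erodes preference slowly (delayed, diffuse)
        preference *= 1 - 0.05 * brand_cut
        # Weaker preference raises CAC; crowded channels add steady inflation
        cac *= (1 + 0.10 * (1 - preference)) * 1.02
        # A fixed $1M performance budget still buys leads every quarter,
        # so the dashboard keeps reporting activity -- just less of it
        leads_per_quarter.append(round(1_000_000 / cac))
    return cac, leads_per_quarter

final_cac, pipeline = simulate()
print(f"CAC after 8 quarters: ${final_cac:.0f}")
print("Leads per quarter:", pipeline)
```

Each quarter's drop looks like a separate, small problem; the compounding cause never appears as a line item.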
The dashboard reported progress. The market was quietly withdrawing consent. Pipeline metrics may remain stable or even improve while this unfolds, because pipeline measures activity, not preference. The system detects failure only after it has become structurally embedded. That is why the bill arrives late. And why, when it arrives, it is much larger than anyone expected.
Measurement is indispensable. Organizations cannot operate without feedback systems. But measurement systems can only optimize within the objective they are given. They cannot determine whether that objective is correct. That decision is irreducibly strategic. It precedes instrumentation. It cannot be delegated to infrastructure. The dashboard can tell you whether pipeline is increasing. It cannot tell you whether optimizing pipeline is building a stronger company, or merely harvesting its remaining accessible demand. Those questions exist upstream of measurement. They must be answered before the dashboard opens.
Marketing did not lose power because it forgot how to build brands or shape market perception. It lost power because the systems it adopted made those activities organizationally invisible: illegible to the CFO, absent from the board deck, undefendable in a budget conversation without a pipeline attribution to anchor them. What an organization cannot see with its measurement instruments, it stops funding. What it stops funding, it stops doing. What it stops doing, it eventually loses the ability to do at all. The organizational muscle does not disappear because people forget concepts. It disappears because the organization stops creating conditions in which those concepts can be exercised.
Attribution systems did not solve this problem. They stabilized it. They provided a coherent narrative explaining pipeline creation inside the existing measurement framework, making Loss Function B legible, defensible, and optimizable. But attribution operates within the ontology the measurement system defines. It can only assign credit to what the system can observe. Once the measurement system defines the objective, and attribution defines the causal narrative, the organization becomes fully enclosed inside its own instrumentation. That enclosure — how it formed, how it shapes behavior, and how it was gamed — is the subject of the next article.
The trap was not measurement. It was mistaking measurement for strategy. The dashboard told you, with increasing precision, how well you were doing what you were already doing. It could not tell you whether what you were doing was the right thing. And once that confusion was embedded in infrastructure, changing it required more than strategic intent. It required reorganizing the measurement system itself — which is harder than any strategy offsite, and slower than any quarter.
The enemy is not measurement. It is the conviction that the report contains the answer to a question you never asked.
This is the second in a series on how SaaS marketing lost its strategic foundations. Subscribe to follow the argument. Read the first episode here.