Issue No. 040 · 19 March 2026

The Reversed Fibonacci #3: Your Attribution Model Is Your Organization’s Rorschach Test

How attribution became the invisible governance layer of SaaS marketing

By Oxana Goul · Originally on Substack ↗

Read the first episode here and the second episode here.

In 1921, the Swiss psychiatrist Hermann Rorschach introduced a psychological test built on a simple premise. Show someone an ambiguous image — an inkblot with no inherent meaning — and ask what they see. One person sees a butterfly. Another sees two people dancing. A third sees nothing at all. The test does not measure the image. It measures the observer.

Business dashboards are supposed to work differently. They present numbers. Numbers are supposed to settle arguments. The report says what happened. But anyone who has spent time inside a SaaS company knows that pipeline reports rarely end debates. They start them. The CFO sees proof that demand generation is working; the CMO sees evidence of brand influence; sales leadership sees confirmation that the outbound sequence drives revenue; the board sees validation of the go-to-market model. Everyone is looking at the same report. Everyone sees something different.

The standard explanation is that executives interpret data through their incentives. That is partly true, but it misses what makes attribution reporting different. The ambiguity does not start with the reader. It starts with the model. The attribution model is the first observer of the inkblot. Executives reading the report perform the second interpretation. The real diagnostic is not how each function reads the dashboard. It is which model produced the report in the first place.

Every attribution model contains a theory of value creation

Attribution claims to answer the causal question: what caused a deal to happen? But it is structurally a correlational instrument. A B2B purchase unfolds across months, sometimes years, across multiple stakeholders and dozens of interactions: ads seen, content consumed, webinars attended, analyst reports read, peers consulted, sales conversations held. By the time the contract is signed, dozens of influences have accumulated. The events are observable. The causality is not.

Therefore, attribution constructs causality through rules. First-touch assumes value originates at discovery. Last-touch assumes value is created at conversion. Multi-touch distributes influence across the recorded journey according to a weighting scheme. Each rule produces a different explanation for the same deal. The data does not change. The story does.
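The point is easiest to see in code. Below is a minimal sketch of the three rules applied to one recorded journey; the touchpoint names and the 40/20/40 U-shaped weighting are illustrative assumptions, not a reference to any particular vendor's implementation:

```python
# One recorded buyer journey, credited three different ways.
# Touchpoints are hypothetical examples.
journey = ["webinar", "blog post", "email nurture", "demo request"]

def first_touch(touches):
    # All credit to the first recorded interaction: value originates at discovery.
    return {t: (1.0 if i == 0 else 0.0) for i, t in enumerate(touches)}

def last_touch(touches):
    # All credit to the interaction closest to conversion.
    return {t: (1.0 if i == len(touches) - 1 else 0.0) for i, t in enumerate(touches)}

def u_shaped(touches):
    # A common position-based convention: 40% first, 40% last,
    # remaining 20% spread evenly across the middle.
    n = len(touches)
    if n == 1:
        return {touches[0]: 1.0}
    credit = {t: 0.0 for t in touches}
    credit[touches[0]] += 0.4
    credit[touches[-1]] += 0.4
    for t in touches[1:-1]:
        credit[t] += 0.2 / (n - 2)
    return credit

for model in (first_touch, last_touch, u_shaped):
    print(model.__name__, model(journey))
```

Same input, three incompatible stories: the webinar gets everything, or the demo request gets everything, or credit is smeared across the journey by a weighting scheme someone chose. Nothing in the data adjudicates between them.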

This is what makes attribution political, not analytical. Two mechanisms make it possible. First: the model determines which events are allowed to exist in the causal narrative. If an interaction cannot be captured as a touchpoint — a conference conversation, a peer recommendation, an analyst briefing — it never enters the system. Second: the model determines which recorded events are emphasized. A weighting scheme elevates some interactions and marginalizes others, turning a sequence of contacts into a story about what mattered. In fact, every attribution model contains a theory of value creation. Change the model and the causal story changes with it. Change the causal story and the budget conversation follows.

Which leads to a surprisingly telling diagnostic question: which attribution model does your organization use as the default? The answer says more about the organization than almost any strategy document. An organization defaulting to last-touch believes, institutionally, that value is created at the moment of conversion. One defaulting to first-touch believes value is created at discovery. One that inherited its default without consciously choosing it has never examined its own theory of value creation, which is the most revealing Rorschach answer of all.

Which attribution model does your organization use as the default? The answer says more about the organization than any strategy document.

Choosing an attribution model is less like selecting a metric and more like drawing the electoral map before the vote. The model defines the districts, assigns weight to each interaction, and determines how credit will be counted once the deal closes. Once credit is defined, the rest follows predictably. Credit commands budgets. Budgets command headcount. Headcount commands legitimacy. Routinely presented in QBRs as an analytics problem — a reporting preference, a technical debate between competing models — attribution is none of these things. It is a governance mechanism. The fact that most organizations still treat it as an analytical decision tells you something about the analytical maturity of the organization, not the nature of the instrument.

Choosing the attribution model is writing the Constitution

Like most governance systems, attribution operates most powerfully when nobody recognizes it as governance. The attribution model is the constitution of the marketing organization. Most people working inside it have never read it. Many could not describe it precisely. Many don’t know it exists. Yet the organization sees the business through its lens, allocates budget according to its logic, and promotes people who are fluent in its language.

The most important feature of this constitution is rarely discussed: marketing did not write it. It emerged from the infrastructure of the SaaS era: venture reporting requirements encoded into CRM systems. Salesforce fields, pipeline stages, lead sources, campaign IDs: the scaffolding of measurement was already in place before most marketing teams realized it was governing them. Inside that architecture lived an implicit assumption that was never formally debated: value originates near the transaction. The interaction closest to conversion receives the clearest credit. Earlier influences become progressively harder to observe, to quantify, and to justify in a budget conversation.

Most marketing teams know, in principle, that multiple attribution models exist. Best-practice guidance recommends using different models for different questions. In theory, attribution is a flexible analytical tool. In practice, most organizations still operate with a single default model. Across B2B and SaaS companies today, multi-touch position-based models are often declared the standard. Yet day-to-day reporting frequently behaves much closer to last-touch attribution, particularly in dashboards tied directly to CRM or advertising platforms. Even companies that claim to use multi-touch usually default to one primary narrative when reporting pipeline performance. The declared model and the operational model are often different documents. The operational one governs.

Attribution is not an analytics problem, nor a reporting preference, nor a technical debate between models. Attribution is a governance mechanism.

Moving from last-touch to first-touch attribution does not change what happened. It changes who gets credit for what happened: a redistribution of organizational power dressed as a measurement upgrade. Every model change is, in effect, a constitutional amendment. Which is why model changes in SaaS organizations are rarely quick decisions: they are contested, delayed, and often resolved by whoever holds the most institutional authority, not whoever has the clearest analytical argument.

The constitution was ratified before the CMO arrived. Marketing teams gradually learned to operate within rules they never helped write and could rarely challenge. The function that controls the definition of value controls the budget conversation that follows.

The evidence factory that won every quarter

The previous article in this series showed how measurement infrastructure encoded the shift from demand creation to demand capture, making it structurally expensive to reverse. Attribution did something more consequential. It did not merely fail to capture demand creation; every quarter, it manufactured evidence that demand capture was superior.

Loss Function B — maximize marketing-sourced pipeline contribution — did not win the budget argument just once. Attribution made it win every quarter. Each QBR produced fresh proof: these campaigns generated this pipeline, these channels drove this conversion, these programs contributed this revenue. The evidence was accurate within the system’s own logic. And cumulative. By the time anyone wanted to argue for demand creation investment, they were arguing against a body of quarterly evidence, not an absence of data. The burden of proof had shifted. Attribution shifted it.

That is a different, and more durable, kind of dominance than any single measurement decision could produce.

What the model sees, and what it was never meant to reach

The asymmetry becomes clearer when one considers where in the buying journey attribution sees most clearly. Sales operates at the moment attribution observes best: the opportunity appearing in the CRM, the meeting booked, the deal negotiated. Marketing often operates months earlier, shaping the conditions under which that meeting becomes possible in the first place. Attribution does not deliberately favor one function over the other. It favors the organizational function that operates where measurement is clearest. Activities closest to the visible moment of pipeline creation are recorded with precision and accumulate quarterly evidence of effectiveness. Activities that shaped markets upstream — analyst briefings that matured over eighteen months, thought leadership programs that built reputation across multiple quarters, category framing that shaped buying decisions years before any CRM record existed — left no trace in the attribution window that governed quarterly reporting. What the model could not see received zero credit, regardless of actual influence.

An even deeper limitation lies in what attribution systems cannot see at all. Attribution measures the conversations brands initiate. It is structurally blind to the conversations brands inspire: those conversations happen where no tracking pixel has ever reached. A recommendation shared in a Slack workspace, a founder mentioned in a podcast, an analyst report that places a company on a shortlist months before procurement begins: these interactions influence buying decisions constantly. They leave no trace inside the attribution system. This is not a temporary technical limitation awaiting a better tool. It is a structural property of digital measurement: attribution records interactions initiated by the company itself. The most powerful forms of influence in complex B2B markets often occur outside that perimeter, in conversations the company did not start and cannot instrument.

There is a tool that can surface what the model cannot reach: asking buyers directly what influenced their decision. Self-reported attribution — the win/loss interview, the post-purchase survey — consistently surfaces peer recommendation, brand reputation, and content consumed long before the sales cycle began. The case is most compelling in mid-market B2B with long sales cycles: nine months, eight stakeholders, decision processes that unfold largely outside any tracking system. These are the deals where the gap between what attribution sees and what actually happened is widest. Most SaaS organizations conduct win/loss interviews. The insights stop at the analyst’s slide deck. The dashboard does not update. The budget does not move. The objections to acting on self-reported data are sometimes methodologically valid. They are always structurally convenient.

Attribution does not distribute truth. It distributes confirmation.

When the model reshapes behaviour and organizations

Attribution systems were introduced to answer a legitimate question: which marketing activities generate revenue? But attribution systems do not simply measure marketing behavior. They create the environment to which marketing behavior adapts. Once attribution became the basis for evaluation, teams began to optimize for the model, not because anyone intended to game the system, but because rational actors respond to the incentives embedded in their measurement environment.

Content calendars filled with keyword clusters because search-optimized articles generated measurable visits. Event teams optimized for badge scans because lead capture was the metric. Campaign structures were redesigned to maximize trackable influence across the buyer journey. Marketing operations teams studied model assumptions and built programs to score well against them. The objective quietly shifted from influencing the buyer to influencing the model. Vendors responded by introducing new models — multi-touch, U-shaped, algorithmic — and teams adapted again. The system did not evolve toward accuracy. It evolved toward equilibrium between measurement and adaptation, stable precisely because both sides had invested too much to abandon it. The arms race produced more measurement. Not more causality.

At this stage the measurement system became self-reinforcing. Executives relied on attribution reports to allocate budget. Vendors built sophisticated tools to defend the credibility of those reports. Marketing teams learned to operate inside the measurement environment that governed their evaluation. Complexity increased. Confidence increased with it. But the underlying ambiguity remained, because the original question — what actually caused the deal? — was never observable.

Over time, the visibility bias began reshaping organizational structure itself. In many SaaS companies, functions once associated with sales development — SDR and BDR teams — migrated toward marketing organizations because their output fit neatly into the attribution framework: meetings booked, opportunities created, pipeline generated. Attribution did not merely reshape what marketing does: it reshaped where the boundary between marketing and sales sits. Activities that could be attributed moved toward the function accountable for attribution. Activities that could not be attributed drifted away from budget protection regardless of their strategic influence. Over time, organizations withdrew from the invisible parts of the system and concentrated on the visible ones. The dashboard reported progress. The function hollowed out.

How attribution trained a generation of marketers

The deeper consequence was not the bandwidth consumed by the arms race, or the organizational reshuffling it produced. It was what the bandwidth built. Hiring favored attribution fluency. Promotions rewarded dashboard literacy. Career trajectories developed inside the model: upward mobility came to those who could translate marketing activity into pipeline narrative, who understood the attribution architecture well enough to structure programs within it, who could defend budget allocations in the CFO’s language. These are real and valuable skills inside a system organized around an attributable pipeline.

The professionals who built SaaS marketing organizations in the decade from 2015 to 2025 were formed, in significant part, by this environment. The capabilities that compound over longer horizons — brand building, category creation, market intelligence, positioning work that precedes the sales cycle by years — did not generate the same formation opportunities, because the incentive architecture that governed hiring and promotion was built around what the attribution model could prove each quarter. This is how scope contraction becomes self-reproducing. Not because anyone decided strategic capability was unimportant. Because the formation system gradually stopped rewarding it.

And then the spiral closed on itself. The people promoted inside the system became the ones making the next round of hiring decisions. Each generation selected for attribution literacy, each cohort making the next one more likely to be selected on the same terms. A Fibonacci contraction operating not just at the organizational level, but at the level of human capital.

Image created with Gemini Nano Banana 2

Forget the trap, once you’ve got the fish

Some rare organizations reached a different relationship with the instrument. They employed multiple models simultaneously, each answering a specific question: first-touch to understand where initial awareness forms, last-touch to understand what captures existing intent, and multi-touch to map how channels interact over time. None was treated as a definitive verdict on what caused growth. Each was a scalpel for a specific cut, not a constitution for all decisions. These organizations maintained the distinction between the instrument and the thing it was supposed to measure. Their rarity is itself evidence of how strong the institutional pressures toward single-model dependence were.

What makes single-model dependence so durable is the cognitive process that sustains it. Marketing’s contribution to revenue was once a strategic judgment, one that required evaluating how brand preference, category authority, and market education translated into durable demand. Attribution replaced that judgment with a calculation. Once the calculation was running, the original question slowly disappeared. The organization no longer asked what actually caused buyers to choose one vendor over another. It asked what the attribution model said caused the deal. This process has a name: surrogation. The proxy metric gradually becomes the thing it was supposed to measure. The same pattern has run through this series at every level: a measure becomes a target, a target becomes the objective, the instrument replaces the reality it was designed to observe.

Zhuangzi, the Chinese philosopher, described the mechanism more than two thousand years ago: “The fish trap exists because of the fish. Once you’ve got the fish, you can forget the trap. Words exist because of meaning. Once you’ve got the meaning, you can forget the words. Where can I find a man who has forgotten words so I can have a word with him?”

SaaS marketing did the opposite. It optimized the trap. And gradually stopped asking about the fish.

What the precision concealed

The rationality of each individual move — each model selection, each budget reallocation, each hiring decision, each quarterly report — compounded across a decade into an outcome no individual decision maker would have chosen. Attribution did not merely measure marketing performance. It redistributed organizational power toward the activities it could see, manufactured the quarterly evidence that made that redistribution feel rational, and trained a generation of professionals inside the logic it had established. The system worked. It worked precisely. And it concealed the cost of its own precision.

Both sides of the sales-and-marketing credit debate produce accurate reports about different things. Sales: the SDR sequence triggered the meeting. Marketing: eighteen months of content primed the buyer. Both correct. Neither measuring what actually determined the outcome. Clean attribution reports made accountability feel precise. That precision masked a structural misalignment between accountability and authority over the variables that actually determine pipeline conversion. The next article will show what that misalignment cost and why the bill arrived so late.

This is the third in a series on how SaaS marketing lost its strategic foundations. Subscribe to follow the argument.

← Previous essay

The Reversed Fibonacci #2: When Measurement Becomes Strategy

Next essay →

The Reversed Fibonacci #4: Win the Quarter, Lose the Market
