The Compounding Math That's Killing Your Marketing Campaigns

By Kate O'Keeffe
April 2, 2026 · 4 min read

The Decision Stack No One Audits

Every marketing campaign is a stack of decisions. And if the foundation is wrong, everything built on top of it is too.

Most marketers know this intuitively. What they don't do is run the numbers.

Before a single creative asset goes live, a campaign requires a sequence of judgment calls: which audience segment to target, which job-to-be-done matters most to them, which buying driver to lead with, which product benefit to feature, which creative direction to run.

Each decision feels considered. Each is informed by research, experience, and instinct. But here's the uncomfortable reality: surveys — the primary research tool underpinning most of those calls — predict purchase intent at 20–30% accuracy. They measure what people say they'll do, not what they actually do.

That's the say/do gap. And it compounds.

The Math Compounds Fast

If your audience segment decision is right 25% of the time, your read on the job-to-be-done is right 25% of the time, your buying driver selection is right 25% of the time, and so on down all five calls, you don't end up with a campaign that's 25% right. You end up with one that's 0.1% right.

That's how probability works when you stack decisions that all have to be right: the odds multiply. Five calls at 25% accuracy each: 0.25 × 0.25 × 0.25 × 0.25 × 0.25 ≈ 0.00098. Less than one tenth of one percent.
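
If you want to sanity-check the arithmetic, a few lines of Python make the compounding explicit. The 25% figure is the illustrative per-decision accuracy from above, and treating the five calls as independent is a simplifying assumption, not a claim about how campaigns actually work:

```python
# Sketch of the compounding math above. Assumes each of the five upstream
# calls is right 25% of the time and that all five must be right
# (independently) for the campaign to land on target.
per_decision_accuracy = 0.25
decisions = 5  # segment, job-to-be-done, buying driver, benefit, creative

campaign_accuracy = per_decision_accuracy ** decisions
print(f"{campaign_accuracy:.5f}")  # 0.00098, i.e. less than 0.1%
```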

Most marketing teams don't frame it this way. They treat a poor campaign as a creative problem — wrong headline, wrong image, wrong platform. But if the upstream decisions were built on guesswork, better creative won't save you. You're decorating a broken foundation.

This isn't a reflection on the quality of marketing teams. The best marketers in the world are operating with fundamentally unreliable inputs. They've gotten exceptional at making judgment calls under uncertainty — because they've had no choice. But that's changing.

When the "Obvious" Answer Loses

The stakes of getting this wrong aren't theoretical. In a recent two-day live market experiment run through Heatseeker, an offer of 30% cash back underperformed no cash back at all.

That result runs counter to almost every instinct a marketer would bring to the briefing room. More discount = more conversion. It's obvious. Except it wasn't right.

This kind of counterintuitive finding happens consistently in live experiments, because live experiments measure behaviour — not stated preference. Real people, making real choices, in a real market context. Not answering a survey question about what they think they'd do.

Live market experiments correlate to real-world buying behaviour at 85%+ accuracy. That's not a marginal improvement on the 20–30% you get from surveys. It's a different category of signal entirely.
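
To see what "a different category of signal" means once the decisions stack, run the same compounding exercise with both figures. The 85% and 25% numbers come from above; the five-decision stack and the independence assumption are the same simplification as before:

```python
# Same five-decision stack, two grades of per-decision signal.
decisions = 5

survey_signal = 0.25 ** decisions      # ~0.00098, i.e. under 0.1%
experiment_signal = 0.85 ** decisions  # ~0.4437, i.e. ~44%

print(f"surveys:     {survey_signal:.4%}")       # 0.0977%
print(f"experiments: {experiment_signal:.2%}")   # 44.37%
print(f"gap:         {experiment_signal / survey_signal:.0f}x")  # ~454x
```

Even on this simplified model, moving from survey-grade to experiment-grade inputs takes a five-decision campaign from effectively never right to right nearly half the time.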

The Case for Covering the Full Stack

The answer isn't to replace every decision with a live experiment — that's not practical at campaign speed. The answer is to cover the decision stack with the right tool at the right layer.

Live experiments give you the behavioural ground truth: which buying driver actually moves people, which offer structure wins, which value proposition resonates in a real market context. That validated data then becomes the highest-quality training signal for synthetic personas — AI avatars built on real behavioural data that can accelerate decision-making across the rest of the stack.

You need both. The accuracy of live experiments to establish what's actually true. The speed of synthetic personas to apply that truth across the full range of decisions. This is what it looks like to go fast and go right.

The Standard Needs to Change

The current approach — making 200 marketing decisions a month, validating three of them, and hoping the rest are close enough — isn't a resource problem or a time problem. It's a data infrastructure problem.

A two-day live experiment can resolve a decision that would otherwise be a $500K guess. Synthetic personas trained on verified behavioural data can stress-test a campaign positioning in hours. The question is no longer whether you can validate marketing decisions with behavioural data. The question is why you'd keep building campaigns on a foundation you know is 70–80% wrong.
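
To make that trade-off concrete, here's a hypothetical back-of-envelope model. Only the $500K stake and the two accuracy figures come from the article; treating accuracy as the probability a single decision pays off is an illustrative simplification:

```python
# Hypothetical expected-waste comparison on a single $500K decision.
# Accuracy is treated as the probability the call is right; the $500K
# stake and the 25% / 85% figures come from the article, the rest is
# illustrative.
stake = 500_000

expected_waste_guess = stake * (1 - 0.25)      # $375,000 on a survey-grade call
expected_waste_validated = stake * (1 - 0.85)  # $75,000 on an experiment-grade call

print(f"guess:     ${expected_waste_guess:,.0f}")
print(f"validated: ${expected_waste_validated:,.0f}")
print(f"headroom:  ${expected_waste_guess - expected_waste_validated:,.0f}")  # $300,000
```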

See how Heatseeker closes the say/do gap →


This article was adapted from a post originally shared on LinkedIn by Kate O'Keeffe, CEO & Co-Founder of Heatseeker.
