By Kate O'Keeffe · April 2, 2026 · 4 min read

A CMO at a top-10 CPG company told me something that stopped me mid-conversation.
She makes roughly 200 marketing decisions a month. Messaging. Creative. Targeting. Pricing. Channel mix. Decision after decision, each one committing budget, shaping customer perception, or setting the trajectory for the next call.
How many does she validate with behavioural data before committing?
Two. Maybe three.
The rest? Gut feel, precedent, or whatever her agency recommended this quarter. And before you write her off as an outlier: she's not. She's exactly the norm.
There's a pattern in how marketing organisations approach validation. They invest heavily in the big strategic bets — the rebrand, the new market entry, the category repositioning. These get research budgets, agency workshops, and months of deliberation.
What doesn't get validated is everything else.
The micro-decisions. The hundreds of smaller calls that collectively represent millions in committed spend — which audience cluster to lead with, which product benefit to feature in the email, which discount structure to test, which creative hook to run on Meta. These get made at pace, often by junior teams, with no behavioural signal to guide them.
The intuition behind this makes sense. Validation takes time. You can't research every call. So you save the rigorous process for the decisions that look strategic and trust your team's judgment on the rest.
The problem: that judgment is wrong more often than anyone wants to admit.
When marketing teams have tested their micro-decisions with live behavioural experiments — the kind that measure real choices, not stated preferences — the results are consistently uncomfortable.
Roughly 40% of the time, the “obvious” answer loses. Not by a little. It finishes last.
The thing that was supposed to win — the higher discount, the features-led message, the broader audience — underperforms the alternative the team considered secondary, or didn't consider at all.
That means roughly two in five of your unvalidated marketing decisions are actively wrong. Not unlucky. Wrong. Because what your team thinks customers want and what customers actually do are different things. This is the say/do gap — and it's playing out at scale, invisibly, across every marketing organisation running on survey data and instinct.
The marketers making these calls are skilled. Many are exceptional. But they're operating with fundamentally unreliable inputs — survey-based research tools that predict purchase intent at 20–30% accuracy — and making hundreds of consequential calls per month based on that signal.
This isn't a failure of judgment. It's a failure of infrastructure.
Live market experiments — two-day tests that put real offers in front of real audiences and measure actual behaviour — can resolve micro-decisions that would otherwise be expensive guesses. And synthetic personas trained on verified behavioural data can extend that accuracy to the decisions that need to move even faster. What used to require months of consumer research can now be resolved in 48 hours with behaviour-level confidence.
Here's a useful frame: the CFO validates financial assumptions before committing capital. Every major spend decision goes through a model, a forecast, a stress test. The logic is simple — when you're about to commit real money, you want evidence that the assumption it's built on is sound.
Marketing spend is subject to the same logic. Yet the assumptions underpinning most campaign decisions — the audience, the message, the offer — never see the inside of an evidence review.
The CMO should hold marketing assumptions to the same standard the CFO holds financial ones. Not for every decision, but for the decisions that cascade — where getting the upstream call wrong compounds the cost downstream.
Two days of live experimentation to validate the decisions that drive hundreds of thousands in spend isn't a luxury. It's the minimum standard for running a disciplined marketing function.
See how Heatseeker closes the say/do gap →
This article was adapted from a post originally shared on LinkedIn by Kate O'Keeffe, CEO & Co-Founder of Heatseeker.
