
Why Your Concept Testing Might Be Lying to You

  • Writer: Sam Blaxland
  • Mar 9


There's a story I heard when I first started in market research that I often think about. In the 1970s, Mortein's iconic "Louie the Fly" (incidentally created in a taxi by the former adman and author Bryce Courtenay) allegedly bombed in concept testing. The agency stuck with it anyway, and it ultimately became one of Australia's most memorable advertising campaigns. Whether or not the story is true, it highlights a phenomenon I've observed across hundreds of concept tests: the methodology can matter more than the concept itself.


After years of testing everything from retirement villages to skincare products to power tools, I've become increasingly conscious of the ways concept testing can quietly go wrong. Not through obvious errors, but through subtle methodological choices that contaminate results in ways that only become apparent when you're watching closely. Here are four patterns I see repeatedly:


1. Liking a concept doesn't mean buying the product 

I once watched a woman get genuinely excited about a video conferencing concept. She could see herself using it "when I'm holidaying in Tuscany," she said enthusiastically. The concept scored well. The strat planner nodded sagely. The client was thrilled. 


Then someone asked: "How often do you holiday in Tuscany?" 


Once every few years, it turned out. Maybe. 


The gap between concept excitement and actual purchase behaviour can be staggering.



I've seen a study where a "definitely would buy" score of 36% translated to an actual take-up rate of 4%.

The problem isn't that people are lying - it's that they're projecting idealised futures onto new ideas. They're responding to novelty and an aspirational lifestyle they see themselves living, not the everyday life of a Tuesday morning with school lunches and late buses and mismatched socks. 


Good concept testing forces those trade-offs into view:

  • What does this replace?

  • What changes in your day?

  • What do you stop buying to make room for this?

These aren't abstract questions about innovation - they're concrete questions about behaviour. The difference between "would you use this?" and "what would you have to change in your life to use this?" is the difference between research that flatters a concept and research that predicts its success. 


2. Group dynamics can kill truth (or reveal it) 


Recently, I ran automotive groups testing a new SUV. A middle-aged, fairly blokey participant looked at the vehicle and said firmly: "I'd rather get a Prado."

Here was a man who clearly knew his stuff about 4WDs. The rest of the group immediately deferred to his authority. The conversation stalled. 


But before this point in the conversation - before we had even started talking openly - I'd had our participants write down their own personal thoughts on the concept in silence. And I could see that the notes of those closest to me told a very different story. People actually liked the vehicle. They could see value in it. But they'd been ready to follow the confident voice in the room rather than trust their own judgment. 


When we pitch the idea of focus groups, researchers often talk about the value of open conversation. We espouse the idea of articulate, open-minded participants listening to one another and building on each other's responses to reach a reasoned, rational consensus on whether a concept works or not.


But in reality, people don't act this way. I've watched a single participant reshape an entire room's opinion in under two minutes. Some participants have more topic knowledge than others. Some are more opinionated than others. Some simply won't listen to the thoughts and experiences of others.

The question isn't whether to use groups, but how to capture genuine individual truth before social dynamics take over, then harness those dynamics productively rather than letting them distort the signal.

Get it wrong, and you either miss the real appeal or amplify false enthusiasm. Either way, you're making decisions on contaminated data. 


3. Low engagement categories need different rules 

Not every category has active problem-seeking behaviour. Most people aren't walking around frustrated by their laundry detergent. They have a brand they buy. It works well enough. They're not looking for innovation.


But I've watched concept tests where the moderator spends fifteen minutes discussing laundry frustrations before showing a concept that solves those exact frustrations. By the time participants see the solution, they've manufactured importance: the moderator has educated them about what they should care about in this category, trained them to articulate problems, and then shown them a concept that’s the solution to their woes.


This style of priming is especially toxic in low-engagement categories. People naturally want to be helpful in research. If you ask them what frustrates them about something, they'll find frustrations for you - even if those frustrations have never once influenced their actual purchase behaviour. Then you build a product solving problems that don't actually drive category decisions.


4. Testing the wrong thing at the wrong stage wastes millions 


I've seen a new construction product that cost millions to develop, with a clever manufacturing process and beautiful engineering. By the time they came to concept testing, they'd already committed a massive budget to tooling, inventory and market preparation.


But the groups were clear that they didn’t like this new material. When pushed on what it would take to buy it, the answer was blunt:

"Make it out of wood." 

That's what testing too late looks like. The organisation had moved from exploration to commitment without checking whether the core concept solved a problem people actually cared about solving. At that point, research becomes political theatre - everyone's hoping we can help them market their way out of a product problem. Sometimes we can. Usually, we can't. And by then, course correction is catastrophically expensive. 


There's also the question of what you're actually testing:

  • The product's desirability?

  • The proposition's fit?

  • The advertising execution?


These are fundamentally different questions that require different methodologies at different stages. I've seen concepts tested as if they're advertisements - loaded with benefits, mood boards, selling language. Testing this tells you whether the persuasion works, not whether the underlying product solves a real problem. Test the wrong thing at the wrong stage, and you get beautiful research that answers the wrong question entirely.


So, what does good concept testing look like? 

Good concept testing gets to the truth. It explores a concept's potential by grounding it in people's real-life behaviours, not their aspirational lifestyles.


It empowers participants to speak with an equal voice, without fear of being judged for it. It doesn't create problems for clients' concepts to solve. And it tests concepts early, before major commitments, and separately from messaging.


This also demands skilful judgment about what the moderator is seeing:

  • When someone's written response differs from what they say aloud ten minutes later, which one matters more?

  • When participants switch from scepticism to enthusiasm, is it a genuine discovery or an avoidance of disagreement with a loud voice in the group?

  • When a concept scores well but no one can articulate what it replaces in their life, what does that tell you? 


These aren't questions with formulaic answers. They require pattern recognition developed across hundreds of groups and dozens of categories, watching carefully for the signals that predict real-world performance versus the ones that just make everyone feel good in the room. 


So if that Mortein story is true, it suggests that sometimes concept testing can reject something brilliant. But more often, in my experience, concept testing fails by being too generous - by creating conditions that make everything look better than it is.

The methodology leads. The results flatter. And six months later, everyone wonders why the research didn't predict what happened in the market.

 


