Executive Summary
Synthetic consumers and synthetic responses are moving fast from “interesting experiment” to “default workflow” across insights and innovation teams.
Qualtrics reports 71% of market researchers believe synthetic responses will make up more than half of data collection within three years. (Qualtrics) Research Live also reports the same 71% expectation, plus 69% saying they used synthetic responses in the last year. (research-live.com) NielsenIQ describes synthetic respondents as AI-generated personas that can quickly evaluate and optimise concepts, especially early in innovation. (NIQ)
Speed and cost benefits are real. Risk is real too.
The new competitive edge for mid-sized CPG companies is not “using synthetic consumers.” The edge is building a repeatable discipline that stops synthetic outputs from becoming confident fiction.
This week’s post explains what synthetic consumers are actually good for, where they reliably fail, and how to build a practical “human truth firewall” using a CATSIGHT™️-style operating discipline so insight becomes action, not theatre.
If you prefer to listen rather than read, check out the C3Centricity Podcast on your usual channel.

The shift that is happening quietly, and fast
Synthetic respondents used to be a niche topic discussed by research agencies and tech vendors. That changed when generative AI became accessible enough to sit inside everyday research workflows.
Qualtrics’ 2025 Market Research Trends release states that 71% of market researchers agree that within three years, synthetic responses will make up more than half of data collection. (Qualtrics) Research Live reports the same result and adds a second signal that matters more for leaders: 69% claim to have used synthetic responses in the last year. (research-live.com)
A pattern shows up repeatedly when new capability arrives quickly: teams adopt the tool before they build the judgement system around it.
That gap is becoming visible in two ways:
- Teams are using synthetic outputs as “answers,” not as directional input.
- Leaders are starting to ask for proof, transparency and governance, because trust is fragile and mistakes are expensive.
The Market Research Society describes synthetic respondents as AI-generated personas participating in surveys or qual style exercises without a real person behind them. (Market Research Society) NielsenIQ frames synthetic respondents as “stand-in consumers” that can provide feedback quickly, with the current sweet spot being early concept learning and optimisation. (NIQ)
Useful, yes. Sufficient, no.
Your decision check: which decisions will your organisation allow an AI-generated “consumer” to influence, and which decisions still require real humans?
Where synthetic consumers genuinely help CPG teams
Speed is the obvious advantage. Cost is the second. The third is often overlooked: synthetic can reduce organisational friction because it provides directional signals faster than traditional research cycles.
NielsenIQ points to concept evaluation and optimisation as a strong current application. (NIQ) Ipsos also positions synthetic data as valuable in product testing contexts where traditional research is resource intensive. (Ipsos)
That matters for mid-sized CPG companies because innovation bottlenecks tend to sit in three places:
- Too many concepts enter the funnel with weak briefs.
- Too much money gets spent validating obvious failures.
- Too few teams learn quickly enough to improve a concept before it gets killed.
Synthetic can improve the first and second problems if used correctly.
Narrative that will feel familiar: a team has six concept routes and enough budget to validate one properly. Everyone argues. The one concept that survives is the one with internal sponsorship, not the one with the strongest consumer logic.
Synthetic can help by stress testing assumptions early, giving the team a shortlist worth taking to humans.
Comment that matters: synthetic is brilliant at narrowing, not finishing.
Where synthetic consumers fail, even when they sound confident
The danger is not that synthetic outputs are always wrong. The danger is that they are often plausible.
Three failure modes show up repeatedly.
1) Synthetic outputs amplify whatever went into the model
A synthetic “consumer” is not a human with lived experience. It is a statistical and linguistic reflection of patterns, data sources, and prompts.
ESOMAR’s synthetic data guidance focuses heavily on transparency, benchmarking, and conditions under which synthetic boosting should be used, including the importance of sufficient base sizes and penetration constraints for quality guarantees. (ANA – ESOMAR)
That emphasis exists for a reason: synthetic can look clean while hiding structural weakness.
Decision check: does your team know what data the model learned from, and what it never saw?
2) Cultural nuance and emotion get flattened
CPG wins and loses in micro-moments: shame, pride, belonging, comfort, identity, care, fear of judgement, fear of waste, fear of “getting it wrong.”
Those are rarely captured well through synthetic abstraction, especially across markets where language carries status, humour, and cultural meaning that does not translate literally.
Comment that matters: global growth depends on local nuance. Synthetic can help you scale early directional learning, but it cannot replace the felt sense of why something matters to someone.
3) Synthetic data loops can degrade reality
A separate but connected risk shows up when synthetic data gets used to train systems that then generate more synthetic data.
Wired reported on the growing use of synthetic data and also flagged expert concern around over-reliance and feedback loops that can degrade model performance if synthetic gets repeatedly recycled without fresh real-world grounding. (WIRED)
This becomes a strategic issue for insight teams: reality drift.
Comment that matters: once reality drift enters a business, the organisation becomes more confident and less correct.
4) Panel contamination is no longer a theory
A final risk is uglier: synthetic “humans” can also enter research samples and contaminate them.
The Times reported a study where AI bots could pass detection checks in online surveys at extremely high rates, raising serious concerns about survey integrity. (The Times)
That changes the quality conversation. “Real people” is no longer guaranteed just because a survey was fielded.
What verification and quality controls do your partners use, and what do you require?
The Human Truth Firewall: a practical operating discipline
A firewall is not a slogan. It is a set of gates that stop weak signals becoming business decisions.
CATSIGHT™️, the action insight process from C3Centricity, is built for exactly this moment because it forces discipline from objective to action to human truth.
Let me explain this in a little more detail.
Gate 1: Category and brand definition
Synthetic outputs improve when the prompt has boundaries. The broader the prompt, the more generic the output. Make it as precise as you can.
Teams often ask synthetic consumers questions that are really executive questions, not consumer questions. The model responds with something polished, and the team thinks they have clarity.
But you should always ask: “Does the prompt define category, occasion, target consumer, context, and competing alternatives?”
Garbage prompts create beautiful garbage answers; unfortunately, few people challenge what is delivered.
Gate 2: Consumer Aim or Objective
Qualtrics’ data suggests synthetic will dominate a large share of data collection soon. (Qualtrics) That means teams will face more synthetic output volume than ever.
More output does not create more truth. Clear consumer aims do.
So ask yourself: “What is the consumer trying to achieve in this moment, and what would success feel like?”
Consumer aims, rather than business objectives, make insight sharper. Without them, you risk everyone debating their mixed opinions.
Gate 3: Get intimate with the consumer
NielsenIQ positions synthetic respondents as particularly useful early in the innovation cycle. (NIQ) That implies real intimacy still matters when stakes rise.
Use synthetic to generate hypotheses, then use human contact to find what is emotionally true.
Ask yourself which parts of the decision require emotion, identity, or social context.
Synthetic consumers may accelerate the questions, but only humans can reveal why.
Gate 4: Gather more to fill information gaps
ESOMAR guidance stresses benchmarking and quality guarantees. (ANA – ESOMAR)
A strong team treats synthetic as one input, then gathers targeted evidence where it is weak.
Ask yourself: “what is missing, and what is the cheapest real-world way to close the gap?”
It is far better to run three small human checks than one big synthetic assumption.
Gate 5: Find a Human truth to build the insight on
Ipsos notes synthetic data can create new possibilities, particularly in product testing, but effects and nuances must be understood in specific contexts. (Ipsos)
A human truth is not “people want convenience.” A human truth is specific and emotionally anchored. Ideally, phrase it as “I want…” or “I need…”, using consumer vocabulary rather than your own internal industry words.
Then ask yourself: “what is the tension the consumer lives with, and what relief are they buying?”
Human truths are the difference between an idea that tests and an idea that sells. They are that important for successful brand building.
Gate 6: Take action based on findings
Research Live reports high current usage of synthetic responses. (research-live.com) The organisational temptation is to treat that as a replacement for action planning.
Insight work without a Monday plan is organisational entertainment.
Ask yourself: “who changes what, by when, and how success is measured?” This prompts the action, without which an insight remains a simple daydream.
Every insight should have an owner, a deadline, and a behaviour metric. That is how you make them truly actionable.
Gate 7: Support from a team
Industry bodies are tightening guidance and ethics as synthetic grows. Economic Times reported MRSI adopting the ICC/ESOMAR 2025 Code, signalling stronger emphasis on ethics and transparency in an AI-shaped insights industry. (The Economic Times)
Governance is not a compliance tax. It is what protects credibility.
Ask: “Who signs off the synthetic usage rules, and who audits them?”
Teams with clear rules move faster because they argue less.
The 90-day rollout plan mid-level leaders can actually implement
Days 1 to 15: Define where synthetic is allowed
- Allowed: early concept screening, hypothesis generation, exploratory segmentation drafts.
- Not allowed: final go-to-market decisions, sensitive claims, cross-cultural messaging without human validation.
Days 16 to 30: Build your prompt library and quality checklist
Use one-page templates that force:
- category and occasion definition
- target consumer boundaries
- excluded contexts
- what would change the team’s decision
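To make the one-page template tangible, the checklist above could be enforced as a simple completeness check before any synthetic study runs. This is a hypothetical sketch only: the `PromptBrief` class, its field names, and the example values are invented for illustration, not part of any existing tool.

```python
from dataclasses import dataclass, fields

# Hypothetical sketch: the one-page brief as a data structure.
# All field names and values are illustrative, not from a real tool.
@dataclass
class PromptBrief:
    category_and_occasion: str
    target_consumer: str
    excluded_contexts: str
    decision_trigger: str  # what would change the team's decision

def missing_fields(brief: PromptBrief) -> list[str]:
    """Return the names of any blank fields, so incomplete briefs are flagged early."""
    return [f.name for f in fields(brief) if not getattr(brief, f.name).strip()]

brief = PromptBrief(
    category_and_occasion="chilled snacks, weekday lunchbox occasion",
    target_consumer="parents of school-age children, UK",
    excluded_contexts="",  # left blank on purpose: the check should flag this
    decision_trigger="drop the concept if appeal trails the category benchmark",
)
```

A gate this small is enough to stop the most common failure, briefs that never define what would change the team's decision.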
Days 31 to 60: Run a “two-track” learning sprint
Track A: synthetic for speed
Track B: human verification for truth
Aim for small, fast human checks:
- shop-alongs or remote diaries in two markets
- rapid qualitative research with real consumers on emotional language
- retailer perspective checks where relevant
Days 61 to 90: Lock governance and publish the playbook
A lightweight playbook wins:
- when to use synthetic
- minimum transparency requirements
- how results must be benchmarked
- required human validation gates
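As an illustration of how lightweight such a playbook can be, the rules above could be encoded as a small lookup table that any team checks before fielding a study. The decision names and rule values below are invented examples, not an official standard.

```python
# Hypothetical governance table: which decisions may rely on synthetic evidence,
# and which require human validation. Names and values are illustrative only.
PLAYBOOK = {
    "concept_screening":        {"synthetic_allowed": True,  "human_validation_required": False},
    "hypothesis_generation":    {"synthetic_allowed": True,  "human_validation_required": False},
    "cross_cultural_messaging": {"synthetic_allowed": True,  "human_validation_required": True},
    "go_to_market_decision":    {"synthetic_allowed": False, "human_validation_required": True},
}

def may_proceed(decision: str, used_synthetic: bool, human_validated: bool) -> bool:
    """Apply the playbook rules to a proposed study design."""
    rule = PLAYBOOK[decision]
    if used_synthetic and not rule["synthetic_allowed"]:
        return False
    if rule["human_validation_required"] and not human_validated:
        return False
    return True
```

Writing the rules down once means the debate happens when the playbook is agreed, not every time a study is commissioned.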
Remember: speed comes from clarity, not from skipping the truth.
Closing thought
Synthetic consumers are here. That is not the debate.
The debate is whether your organisation will use them to become faster and smarter, or faster and sloppier.
A human truth firewall lets you keep the speed while protecting the meaning.
That is how mid-level insight and innovation leaders become the people who restart momentum inside the business, and not just deliver more output.
If your insight development process needs upgrading, why not check out our online course on a best practice process for delivering actionable business insights?