When AI Makes Insight Cheap, Judgement Becomes Priceless for Leaders


This post was prompted by the success of my keynote at IIeX APAC last month in Bangkok.

The topic was “Why Human Intelligence Will Decide Which Insight Teams Thrive.” Enjoy.


The moment I worry most about isn’t the arrival of AI in a business. It’s what happens after the excitement dies down.

A senior team gathers around a screen. The insight lead walks them through a tidy narrative: consumer shifts, market signals, charts that look definitive. The work is sharp. The pacing is professional. The recommendations are sensible.

Someone says, “That’s helpful.” Another adds, “Good work.”

Then the room shifts. Priorities collide. A constraint appears that no dataset could have captured. A risk lands that nobody wants to own. Time tightens. The team defaults to a familiar decision because it’s safe, defendable, and quick.

The deck gets archived. The organisation moves on.

That pattern existed long before generative AI.

AI is about to make it far more common.

Not because AI is “bad,” but because AI floods organisations with polished analysis. Attention will not expand at the same rate as output. Decision friction will not disappear because a model wrote the summary.

That was the central point I made on stage at IIeX APAC: AI will do the processing. Human intelligence must do the deciding.


If you’d like a copy of the full presentation I gave at IIeX APAC last month, let me know.


A twenty-year ambition meets a new reality

Insight leaders have been hearing the same strategic challenge for decades: stop being a reporting function and become a strategic partner. Industry studies have repeated the same promise for years, sometimes phrased as moving from “statistics to strategy” and from “insights to influence.”

The ambition was never wrong. The operating model often was.

Many organisations still keep insight work downstream. Teams deliver outputs. Decisions happen elsewhere. When that’s the pattern, even brilliant work can feel like background music.

One number from my talk captures the gap: only a minority of insight teams are viewed as true strategic partners.

That gap matters because it has survived waves of tooling, dashboards, and sincere efforts to improve storytelling. The constraint has rarely been technical capability. The constraint has been decision design, decision timing, and the confidence to make the work consequential.

AI does not fix that automatically. AI simply accelerates whatever system already exists.

The real risk is not replacement. The real risk is commoditisation.

Most organisations do not suffer from a shortage of information. They suffer from a shortage of clarity.

Generative AI increases the supply of “good enough” analysis: fast synthesis, tidy narratives, plausible recommendations. That changes the economics of insight work. Tasks that once took weeks can now be produced in hours, sometimes minutes.

The danger sits quietly inside that productivity boost.

Teams can start chasing volume and speed, then wonder why decision-makers feel less persuaded, not more. Leaders receive more outputs, yet feel less certain about what to do next. Insight becomes “content” rather than a capability that reduces risk and improves decisions.

This is why I kept using the phrase augmented intelligence on stage.

AI can be the engine. It cannot be accountable for where the business drives next.

The new scarcity: judgement leaders trust

As AI makes analysis cheaper, the premium shifts to human contributions that remain difficult to automate.

Judgement begins with framing. Many requests arrive as questions. Most business problems are decisions disguised as questions. A team that accepts the request at face value becomes an efficient supplier. A team that reframes the work into decision language becomes a partner.

Judgement also includes context. AI can summarise sentiment. It cannot reliably read the emotional constraints that shape adoption: fear of risk, status protection, political trade-offs, and the unspoken reasons a “right” answer will still be rejected.

Judgement requires reality-testing. AI outputs can be fluent, coherent, and still wrong in expensive ways. The more persuasive the output, the more valuable the human instinct to pressure-test it against category dynamics, operational constraints, and frontline truth.

Judgement finally shows up as decision design. Most insight work ends with recommendations. Leaders need something else: a path that makes a decision easier to make and easier to defend. The trade-off must be explicit. The risk must be named. The ownership must be clear. Without that, the organisation defaults to habit.

None of this is soft. It is all very commercial.

Organisations do not invest in insight to know more. They invest to place better bets and make fewer expensive mistakes.

A discipline that changes how insight work is treated

One simple practice can stop AI from turning into faster reporting.

Every time AI produces an output, every time research findings land on a slide, I push the conversation through three gates: meaning, consequence, ownership.

I expressed this on stage as “What, so what, now what?”

What: Meaning prevents teams from hiding behind charts.
So what: Consequence ties the work to a business choice.
Now what: Ownership turns a conversation into movement.

Teams that repeat this discipline become known for decision clarity, not content production. They get invited earlier. They shape the brief rather than receiving it.

Influence grows because uncertainty is reduced where it matters.

What the audience response revealed

Conference feedback is usually polite and vague. My Talkadot results were not.

Every respondent rated the IIeX APAC session valuable, and almost all said they would like to hear me speak again. Relevance and actionability scored particularly strongly. Comments praised clarity and conviction rather than slide craft.

That response is a signal worth taking seriously.

Market appetite is shifting from “more evidence” to “better judgement.” AI increases the supply of evidence. Human intelligence must protect the value of decisions.

What strong teams do next

The next ninety days will shape reputations for the next five years, because AI adoption is moving faster than organisational habits.

Progress does not require a major transformation programme. Three shifts change perception quickly.

  1. Work needs to be framed as behaviour change, not information requests. Decision-makers rarely pay for knowledge. They pay for a shift in what consumers do, what customers buy, and what the organisation chooses to prioritise. My CATSIGHT discipline anchors on that principle: translating business objectives into the attitude and behaviour change required, then shaping insight to drive action.
  2. AI needs to be used as a force multiplier for exploration rather than an author of conclusions. Tools can widen options, test counter-arguments, scan for patterns, and speed up synthesis. Human judgement must choose the trade-off and carry the accountability.
  3. Teams should produce fewer “presentations” and more decision-ready outputs. Leaders act when the decision is clear, the trade-off is explicit, and the next step has an owner. That is what makes insight operational.

The bottom line

AI will make research and insight output abundant.

Businesses will still compete on something scarcer: judgement that leaders trust, expressed with clarity, grounded in reality, and converted into action.

That is why I believe the future belongs to teams who treat AI as something that augments human intelligence, not something that replaces it.


If you would like to have Denyse speak on this topic at your own offsite, meeting or event, please contact us for more details.

Denyse Drummond-Dunn
