Pharmaceutical Market Europe • February 2026
THOUGHT LEADER
By Matthew Hunt
In Co-Intelligence, Ethan Mollick describes AI capability as a ‘jagged frontier’: the same model that excels at one task can fail unpredictably at another that looks almost identical. In healthcare and other regulated environments, that jaggedness isn’t just inconvenient. It’s commercially risky.
Mollick’s argument isn’t anti-AI – far from it. He makes a strong case that the real value of AI comes when humans remain firmly in the loop: supervising, validating and stepping in precisely at the point where the model’s confidence starts to outrun its competence. The frontier becomes navigable not by pretending it’s smooth, but by placing judgement, expertise and guard rails exactly where they’re needed.
That way of thinking sits right at the heart of 11 Minds: the on-demand advisory board from 11 London.
Anyone who’s spent time experimenting with AI inside an organisation quickly runs into the same uncomfortable truth: hallucinations don’t disappear just because you ask nicely. They’re a structural feature of probabilistic systems.
In health, charity, energy, finance and other complex sectors, ‘mostly right’ simply isn’t a comforting benchmark.
The answer, though, isn’t to switch AI off. It’s to be deliberate about where and how it’s used – and where humans step in.
11 Minds is built around a tested, human-centred validation process designed specifically to reduce risk. Minds are trained on approved, relevant data sources. Thresholds are enforced so the system can say, ‘I don’t know’. Outputs are stress-tested against known questions and real-world responses. And answers are benchmarked against live panels of patients or professionals (depending on who you want in the room with you) to check they align with reality, not just plausibility.
That isn’t automation for automation’s sake. It’s AI with adult supervision.
Human intervention isn’t something we’ll remove once the models get ‘good enough’. As AI becomes more capable, the cost of its failures rises, so human judgement becomes more valuable, not less.
Creative disciplines make this painfully obvious. Anyone can generate an image from an ‘obvious’ prompt, but the result is flat lighting, impossible lenses and the strange, synthetic sheen that screams, ‘AI did this’.
Art directors understand why. Lighting is intentional, lenses have physics and perspective carries emotion. Those things don’t come from prompts alone; they come from lived, professional knowledge. When art directors stay in the loop – guiding, correcting, refining – the output stops looking artificial and starts looking real.
The same principle applies across strategy, insight, compliance and training. AI accelerates, but humans decide.
11 Minds is patent-pending, not just because it uses AI but because of how it combines human validation with agentic workflows in a structured, repeatable way. The process matters as much as the technology.
We select and configure the Minds around real client needs. We set guard rails deliberately; validation and quality control are baked in, not bolted on. Clear escalation points ensure humans intervene where judgement, context or accountability are required.
That architecture turns AI from a clever experiment into something clients can genuinely trust.
The easiest things to automate are rarely the most valuable. We built 11 Minds the other way round. Use cases start with what clients actually struggle with, not what’s most convenient to demonstrate.
That’s why the Minds function as an on-demand advisory board rather than a single, generic assistant. Depending on the challenge, clients can convene Insight Minds to interrogate research, Commercial Minds to war-game competitor responses, Brand Minds to test creative and tone, Compliance Minds to navigate codes and precedents, or Training Minds to practise real-world conversations.
You might not be able to pull together a focus group, an advisory board and a compliance panel at short notice. With 11 Minds, you can do exactly that – on demand.
By keeping humans firmly in the loop, validating outputs rigorously and focusing relentlessly on real client needs, 11 Minds turns AI’s current shortcomings into a commercial strength. It’s faster than traditional approaches, safer than naïve automation and – crucially – more useful.
Which, in the end, is what clients actually care about.
Reference
1. Mollick, E (2024). Co-Intelligence: Living and Working with AI. Portfolio
Matthew Hunt is CEO of 11 London, 11 Minds (Health) and 11 Minds (Energy)