Pharmaceutical Market Europe • November 2025 • 13

MIKE DIXON
PATIENT INSIGHT:
HUMAN VS AI HCPs

If we’re using AI to inform healthcare decisions, ethical guard rails are essential


Ask what topics to cover at any conference and two never fail to come up – AI and patients. So, recently introducing a virtual session entitled Understanding the Lived Experience of Patients: AI versus Human Insight1 felt like the golden ticket. And the discussion certainly did not disappoint.

In considering this topic, we of course need to probe not just the current or future capabilities of artificial intelligence, but also its limitations. Especially when it comes to understanding real people, living real lives, with real health challenges. This isn’t about pitting AI insight against human insight, but more about reflecting on how the two can, and should, intersect to benefit our understanding and therefore enhance our communications.

Let’s champion not having an answer

There is no doubt AI is already embedded in our daily work, whether we are consciously using generative platforms or unknowingly interacting with back-end systems that process data. But that presence doesn’t necessarily provide clarity, and there is still a degree of confusion about what AI actually is, what types exist and how they are being used in healthcare communications.

Publicly available large language models mainly draw from open internet sources. This instantly presents a challenge. We know that what is available is already biased; fake facts exist and diverse populations are not necessarily represented. We therefore need to be very cognisant of the limitations, with source data likely to be incomplete, biased or simply wrong. Using these models to consider patient insight, without a significant degree of human reflection, can mislead rather than inform.

However, examples exist where the technology has instead been populated using only rigorously curated data. One, recognised across Europe as a testament to the power of patient-led initiatives, is LupusGPT, developed by Lupus Europe. Besides the fact that the output is based on controlled data, what perhaps makes this approach stand out is its honesty. If it doesn’t know the answer, it says so. That feels reassuringly refreshing in a digital world that has become all about having an answer, even if it’s a hallucination. If we are using AI to inform healthcare decisions, these ethical guard rails feel essential.

Mind the gap

A recurring theme in insight discussions is the issue of representation. AI is only as good as the data it’s trained on – and that data often excludes strong representation from the very communities we most need to understand. Underserved populations, marginalised groups and those who don’t typically engage with formal healthcare systems are frequently missing from the data sets that could be used to feed AI models. Among the examples are: teenage boys with chronic conditions; patients with cognitive impairments; individuals with terminal illnesses; pregnant women and sometimes women in general; specific communities; lower socio-economic cohorts.

This isn’t a technology problem; it’s a human one. These populations are not just difficult to access, their needs are potentially difficult to understand without human-led, empathetic approaches. It’s about recognising the full spectrum of lived experience, including those who struggle to articulate their conditions, those who avoid formal healthcare systems and those whose stories are rarely captured. The reality is that AI, in its current form, cannot reach these individuals in meaningful ways.

Embodiment

Perhaps more philosophical, but nonetheless highly relevant, is the concept of embodiment – the experience of living in a body, with illness, pain and uncertainty. While patient-reported outcomes (PROs) offer some insight, they are often too narrow to capture the full reality.

We need to rethink how we collect and interpret patient data. Ethnographic research, phenomenological methods and deep qualitative interviews are all cited as essential tools for uncovering the subtle, often invisible aspects of illness. These approaches often start with curiosity rather than a hypothesis. That way research follows the patient’s lead and explores what emerges, uncovering insights that structured data collection will most likely miss.

This type of research requires human intuition, empathy and responsiveness. We may surmise that AI will one day be able to emulate these, but there is still the question of trust. Instinctively, patients are more likely to open up to a person than to a platform, and that matters.

Hybrid insight

This brings us to the definition of insight itself in the hybrid world. Historically, much of clinical research has relied on quantitative methods and narrow participant pools. Moving forward, we now need to expand our toolkit, embracing interviews, ethnography and phenomenological research as core components of insight generation.

We can be optimistic about our sector’s growing appetite for patient insight. But that has to be tempered with realism. We are making progress, but have a long way to go to truly understand patients. To achieve that we need to explore the things we’ve never even thought of and although AI may help us, it cannot lead.

There is certainly a synergistic partnership between AI and humans when it comes to patient insight. However, understanding patients isn’t just about data. It’s about dignity. And that always starts with listening.

1. An HCA virtual Shared Experience discussion, facilitated by Kirsty Mearns (Mearns & Pike), with panellists Tamás Bereczky (German Federal AIDS Service & EUPATI, the Patients’ Academy) and Kay Fisher (Experience Engineers)


Mike Dixon is CEO of the Healthcare Communications Association and a communications consultant
