Pharmaceutical Market Europe • February 2026 • Thought Leader
By Dimitri Challouma
The question in healthcare is no longer whether generative AI (GenAI) is coming. It is already here. Tools such as ChatGPT, Gemini and Copilot are being used daily, sometimes officially, sometimes quietly, by clinicians, medical writers, communications teams and operational staff trying to manage growing workloads.
Industry analysis from organisations including McKinsey, Deloitte and Gartner consistently highlights healthcare as a sector where GenAI could have significant impact. That is not because the technology is novel, but because the system is under strain. Data volumes are increasing, time is limited and too much effort is still spent on administrative work rather than patient care.
Early attention has focused on clinical applications. GenAI is being explored to support image interpretation, screening programmes and complex procedures. Guidance from bodies such as the World Health Organization (WHO) and NHS England suggests AI-supported systems may help clinicians recognise patterns more quickly and support decision-making, provided they are used appropriately and with clear oversight.
In practice, however, some of the most immediate value sits outside direct clinical care. GenAI is particularly effective at tasks people do not want to spend time on: summarising long documents, drafting reports, supporting training materials and synthesising large volumes of information. Reducing this administrative burden matters. Every hour saved on paperwork is time that can be redirected towards patient care.
As adoption increases, the conversation is shifting. GenAI is powerful, but it is also fast-moving and easy to misuse. Two factors will determine whether organisations genuinely benefit from it: privacy and literacy.
Healthcare runs on trust. Patients expect their data to be protected, and clinicians expect the tools they use to be safe. While most organisations understand this in principle, risk often emerges through everyday behaviour rather than major system failures.
Public GenAI tools are easy to access, and it is easy to imagine information being pasted in to generate a quick summary without full consideration of where that data goes or how it might be reused. In other cases, AI-generated outputs are assumed to be safe simply because they were produced by a tool. That assumption is often wrong.
Regulatory bodies including the WHO are clear that strong data governance is essential for AI in healthcare. But privacy cannot live only in policy documents. People need clear, practical guidance on what is allowed, what is not, and why.
This is where purpose-built tools matter. As GenAI use grows, healthcare teams need access to information that is not just fast, but trusted and traceable, particularly in complex areas such as rare disease. RAiRE, a rare-disease-specific large language model developed by Havas Life London, is designed with this challenge in mind. Rather than relying on the open web, RAiRE provides structured, trusted and citable information when questions are asked, helping reduce the risk of misinformation, hallucination or inappropriate reuse.
Without privacy built in from the start, GenAI will not scale in healthcare. Trust is too important.
The second critical factor is literacy. Most GenAI tools sound confident and write fluently. They rarely flag uncertainty. This is where problems can arise.
Research from Gartner and Deloitte shows that many GenAI risks stem from over-trust in outputs rather than from technical failure. If users do not understand how these systems work and where they fall short, it is easy to treat them as an authority instead of an assistant.
GenAI literacy is not about turning everyone into a technologist. It is about teaching people how to sense-check outputs, recognise bias and know when human judgement must take precedence. In healthcare, that critical thinking is non-negotiable.
This cannot be limited to IT teams. Clinicians, medical affairs, commercial and communications teams all use GenAI differently. Training needs to reflect real workflows, not abstract AI theory.
GenAI has the potential to improve healthcare, not just through better insight, but by giving people time back. Whether it delivers on that promise depends less on the technology itself and more on how it is used.
Privacy and literacy are not barriers to innovation. They are what make it possible.
This thinking underpins Havas Life London’s work on RAiRE. In areas where information is complex and the cost of getting it wrong is high, RAiRE shows how GenAI, grounded in trusted and citable sources, can support understanding responsibly.
And in healthcare, trust is everything.
Dimitri Challouma is Digital Creative Director at Havas Life London