Pharmaceutical Market Europe • January 2026

AI AND PATIENTS

Designing healthcare AI that works for every patient

As generative AI becomes more embedded in clinical decision-making, ensuring safety, accuracy and relevance is critical

By Tim Morris


The healthcare sector is evolving at an extraordinary pace, driven by new technologies and a rapidly expanding body of clinical knowledge. Healthcare professionals (HCPs) are under growing pressure to stay current with massive growth in information, emerging innovations and shifting best practices, all while managing rising patient demands. Among these innovations, artificial intelligence (AI) is transforming how diagnosis, treatment and care are delivered. AI tools have the potential to support clinical decision-making, streamline workflows and enable more personalised care.

However, the adoption of AI in healthcare brings challenges. These include the risk of clinical errors, misinformation from generative AI, limited transparency in how outputs are generated and algorithmic bias that could exacerbate existing health disparities. To mitigate these risks, AI technologies must be rigorously validated, responsibly deployed and thoughtfully integrated into clinical workflows.

To fulfil AI’s potential, it must be designed to work for every patient, not just those who are well represented in training data sets. This calls for a thoughtful, coordinated approach to development and implementation: designing clinical decision support (CDS) systems that reflect the diversity of patient populations, integrating AI in ways that enhance care without introducing new barriers, and ensuring HCPs are equipped to use these tools effectively.

Why we need effective clinical decision support systems

Healthcare systems globally are under immense pressure, from ageing populations and rising chronic disease burdens to increasing expectations for personalised care and digital transformation. These systemic challenges are compounded by workforce shortages, which threaten the sustainability and quality of care delivery.

The shortage of nurses in the UK is no secret. The NHS Long Term Workforce Plan aims to grow the nursing workforce from around 350,000 to around 550,000 by 2036/37. However, analysis by the Royal College of Nursing shows that the plan is not yet reflected in current staffing numbers. This shortage, alongside mounting patient demand in primary care, where general practitioners see nearly half of the country’s population each month, underscores the scale of the challenge facing the UK healthcare system. Under these workforce pressures and relentless patient demand, it becomes increasingly difficult for staff to deliver personalised care, engage in professional development or maintain a sustainable work-life balance. The resulting cognitive load can erode empathy, reduce decision-making confidence and compromise the quality of patient interactions.

CDS tools are digital systems that help the healthcare workforce to make informed decisions at the point of care. By surfacing relevant, evidence-based medical content in response to clinical queries, these tools can help to ensure consistency and quality across multidisciplinary teams.

When powered by technologies like generative AI, CDS tools can rapidly summarise high-quality, peer-reviewed information to support care delivery and enhance collaboration.

However, their impact depends on clinical accuracy, seamless workflow integration, user trust, and the quality and breadth of the sources they draw on, which must reflect the diversity of patient populations.

These tools can be a lifeline for HCPs at the point of care, but to be used effectively they must be intuitive, trustworthy and capable of delivering clinically relevant insights. This means understanding the unique needs of each patient and helping HCPs make informed decisions quickly and confidently. Crucially, these tools must be trained and tested on data that reflects the full spectrum of patient experiences, including those of marginalised and underserved communities. They should also be adaptable to different clinical contexts, recognising that healthcare delivery varies across regions and systems.

Embedding evaluation into inclusive AI design

As generative AI becomes more embedded in clinical decision-making, ensuring safety, accuracy and relevance is critical. Grounding outputs in trusted, evidence-based sources helps reduce misinformation and bias, while supporting transparency through citable, traceable answers. A retrieval-augmented generation (RAG) architecture strengthens this approach by linking AI outputs to authoritative content, mitigating the risks associated with standalone foundation models and enhancing clinical reliability.
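As a rough illustration of the RAG pattern described here, the Python sketch below shows how a clinical query might first be matched against a curated evidence base, with citable source identifiers carried through to the prompt so that answers remain traceable. Everything in it is an assumption for illustration: the corpus, the naive keyword scoring (a stand-in for embedding-based retrieval) and the prompt wording are hypothetical, not any vendor's actual implementation.

# Minimal sketch of retrieval-augmented generation (RAG) for a clinical
# decision-support query. All content and names are illustrative: a real
# system would use a vetted medical corpus, an embedding model and a
# governed LLM endpoint.

from dataclasses import dataclass

@dataclass
class Document:
    source_id: str   # citable identifier, e.g. a guideline reference
    text: str

# Hypothetical curated evidence base (stands in for peer-reviewed content).
CORPUS = [
    Document("guideline-001", "Adults with type 2 diabetes should be "
             "offered structured education at diagnosis."),
    Document("guideline-002", "Blood pressure targets differ for patients "
             "with chronic kidney disease."),
    Document("review-017", "Metformin remains first-line pharmacotherapy "
             "for most adults with type 2 diabetes."),
]

def score(query: str, doc: Document) -> int:
    """Naive keyword-overlap relevance score (placeholder for embeddings)."""
    return len(set(query.lower().split()) & set(doc.text.lower().split()))

def retrieve(query: str, k: int = 2) -> list[Document]:
    """Return the k most relevant documents from the curated corpus."""
    ranked = sorted(CORPUS, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    """Constrain the model to the retrieved evidence and require citations."""
    sources = "\n".join(f"[{d.source_id}] {d.text}" for d in retrieve(query))
    return (
        "Answer the clinical question using ONLY the sources below, "
        "citing source IDs. If the sources are insufficient, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    # The prompt would be sent to an LLM; traceability comes from the
    # citable source IDs embedded in both the prompt and the answer.
    print(build_grounded_prompt("What is first-line treatment for type 2 diabetes?"))

The design point is that the model is never asked to answer from its own parameters alone: the retrieval step fixes which evidence is in scope, and the source IDs make each claim auditable after the fact.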

To safeguard patient care and maintain trust, healthcare organisations and developers must adopt rigorous evaluation frameworks that reflect the realities of clinical practice. These should assess not only technical performance but also how well AI tools support HCPs in diverse settings, specialties and workflows. Responsible AI principles such as explainability, fairness, human oversight and robust data governance are essential to ensure AI enhances, rather than undermines, clinical judgment.

Elsevier’s framework for evaluating generative AI solutions, including ClinicalKey AI, offers one example. Developed with input from clinical experts across multiple specialties, the framework evaluates AI-generated responses across five dimensions: query comprehension, helpfulness, correctness, completeness and potential for clinical harm.

In addition, a clinician-in-the-loop methodology ensures that responses from generative AI solutions are reviewed by licensed experts, providing real-world relevance and helping developers refine tools to meet the needs of varied clinical environments.
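One way to picture how such a review might be recorded is sketched below in Python. The five dimensions are taken from the article; the scoring scale, the pass threshold and all field names are assumptions for illustration, not Elsevier's published rubric.

# Hypothetical encoding of a clinician-in-the-loop review across the five
# dimensions named above. Scales and thresholds are assumed, not actual.

from dataclasses import dataclass

DIMENSIONS = (
    "query_comprehension",
    "helpfulness",
    "correctness",
    "completeness",
)

@dataclass
class ClinicianReview:
    reviewer_id: str
    scores: dict[str, int]    # 1 (poor) to 5 (excellent): assumed scale
    potential_for_harm: bool  # any plausible route to patient harm
    notes: str = ""

def passes_review(review: ClinicianReview, threshold: int = 4) -> bool:
    """A response is acceptable only if no harm is flagged and every
    dimension meets the (assumed) minimum score."""
    if review.potential_for_harm:
        return False
    return all(review.scores.get(dim, 0) >= threshold for dim in DIMENSIONS)

# Example: a licensed reviewer rates one AI-generated answer.
review = ClinicianReview(
    reviewer_id="rn-042",
    scores={
        "query_comprehension": 5,
        "helpfulness": 4,
        "correctness": 5,
        "completeness": 3,   # an incomplete answer blocks release
    },
    potential_for_harm=False,
)
print(passes_review(review))  # False: completeness is below threshold

Treating "potential for clinical harm" as a hard veto rather than one score among five reflects the article's emphasis that patient safety cannot be traded off against the other dimensions.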

While this framework is specific to Elsevier’s generative AI solutions, the broader principle applies across the industry. AI tools should be evaluated with clinical oversight and tested against a wide range of use cases to ensure they support equitable, high-quality care for all patients.

'AI tools have the potential to support clinical decision-making and enable more personalised care'

Making AI practical and usable in daily care

Beyond safety and inclusivity, AI tools must be practical. Developers must prioritise usability, accessibility and clinician empowerment. Seamless integration into daily workflows is key. When evidence-based insights are embedded directly into the systems HCPs use, they can make faster, more confident decisions without disruption.

Generative AI can support HCPs in identifying diagnoses, optimising treatment plans and generating personalised educational materials. These capabilities improve efficiency and strengthen communication between HCPs and patients, especially those who may struggle with complex medical terminology.

To avoid creating new barriers, developers must embrace user-centred design, transparency and continuous feedback. HCPs should be involved throughout the development process, helping shape tools that reflect real-world needs. Transparency about data sources and limitations builds trust and enables informed decision-making.

Training HCPs to use generative AI effectively

Even the most advanced AI tools are only as effective as the HCPs who use them. As generative AI becomes more embedded in clinical workflows, HCPs must also be equipped with the skills and confidence to use it safely and effectively. This includes understanding how AI outputs are generated, how to interpret them and how to apply them in context.

Healthcare organisations have a pivotal role to play. Just as they invest in digital infrastructure, they must also invest in workforce readiness. Training should be practical, tailored to clinical realities and inclusive of different roles and settings. It must go beyond technical instruction to foster critical thinking and build trust in AI systems.

To support this need, Elsevier has launched the Gen AI Academy for Health, a self-paced, accessible course designed to help HCPs develop the knowledge and skills required to use generative AI responsibly in practice. Empowering HCPs through training is key to unlocking AI’s full potential. When HCPs understand and trust the tools they use, they can deliver safer, more personalised and more efficient care.

A future where AI works for everyone

AI is already being implemented across healthcare, but its rapid adoption must be matched by a commitment to safety, consistency and accountability. The path forward is not about replacing human judgment but enhancing it. AI should serve as a trusted partner, empowering HCPs, reducing friction and improving outcomes.

To achieve this, healthcare systems must prioritise standardised deployment, rigorous validation and clear regulatory frameworks that safeguard patient safety and equity. Initiatives like Elsevier’s Gen AI Academy for Health and its evaluation framework demonstrate how industry can support responsible AI implementation grounded in evidence and clinical expertise.

We can build a future where healthcare AI truly works for everyone by investing in inclusive design, robust evaluation, clinician training and evidence-based content. This means supporting HCPs, respecting patient diversity and upholding the highest standards of clinical excellence.


Tim Morris is VP Commercial Global Nursing Solutions at Elsevier
