Pharmaceutical Market Europe • January 2025
GENAI AND HEALTHCARE
With intuitive interfaces, the ability to handle vast and complex unstructured data, and impressive versatility, GenAI is gaining traction
By Guillaume Duparc and Klaus Boehncke
Healthcare has traditionally lagged behind other industries in adopting cutting-edge technology, but the rise of generative AI (GenAI) and large language models (LLMs) has sparked a wave of interest among clinicians and healthcare providers (HCPs).
While tools like natural language processing (NLP) and ‘traditional’ machine learning have already proven valuable in areas like medical dictation and radiology, LLMs – especially multimodal models – are creating a new level of excitement. With intuitive interfaces, the ability to handle vast and complex unstructured data, and impressive versatility, these models offer potential that goes beyond established technologies, although concerns about accuracy and ‘hallucinations’ remain. Interest among clinicians is growing: recent surveys show that about 20-30% of physicians in the UK and US are already using some form of GenAI at least once a week. This article highlights the clinical use cases where GenAI is gaining the most traction, from in-person care to digital patient interactions.
Before the arrival of LLM-based AI solutions such as ChatGPT, healthcare providers and clinicians primarily used natural language processing and machine/deep learning tools. For example, tools like Nuance Dragon Speech Recognition support dictation, note summarisation and structured data capture.
‘Recent surveys show that about 20-30% of physicians in the UK and US are already using some form of GenAI at least once a week’
Other established machine/deep learning tools are aimed at specific areas of clinical decision-making (eg, RapidAI for stroke care, Blackford for radiology, and Ada for primary care and rare diseases) and population health applications (eg, identifying patients suitable for prevention/trials/specific treatments).
LLMs from OpenAI and other providers offer significantly broader potential and use-case applicability, at least in theory, thanks to their improved user interfaces, text-based context understanding and ability to handle large, unstructured data sets. On the other hand, accuracy and hallucination concerns remain high despite improvements in the quality of models used, whether via specialisation or overall architecture (eg, Retrieval-Augmented Generation, which grounds the model’s answers in specific, trusted knowledge; or constraints on the model’s output).
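To make the Retrieval-Augmented Generation idea more concrete, the minimal sketch below retrieves the most relevant passage from a small local knowledge base and instructs the model to answer only from that excerpt. It is purely illustrative: the guideline snippets are invented, the model name is a placeholder and a production system would use a proper vector store, validated clinical sources and additional safeguards.

```python
# Minimal RAG sketch: retrieve the best-matching passage, then constrain the
# LLM to answer from it. Snippets and model name are illustrative placeholders.
from openai import OpenAI  # assumes the openai>=1.0 client and an API key in the environment
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

guideline_snippets = [
    "Adults with stage 1 hypertension should first be offered lifestyle advice.",
    "Annual retinopathy screening is recommended for all patients with type 2 diabetes.",
]

def retrieve(question: str, documents: list[str]) -> str:
    """Return the document most similar to the question (TF-IDF cosine similarity)."""
    vectoriser = TfidfVectorizer().fit(documents + [question])
    scores = cosine_similarity(vectoriser.transform([question]), vectoriser.transform(documents))[0]
    return documents[scores.argmax()]

def answer_with_rag(question: str) -> str:
    excerpt = retrieve(question, guideline_snippets)
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the supplied excerpt. "
                        "If the excerpt does not cover the question, say you do not know."},
            {"role": "user", "content": f"Excerpt: {excerpt}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_rag("How often should patients with type 2 diabetes be screened for retinopathy?"))
```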
What are the specific differences between these AI approaches, and do they really matter?
GenAI utilises LLMs built on so-called transformer models to generate human-like text and responses. It excels at language understanding and generation, making it suitable for conversational agents and content creation. In healthcare, language capabilities are very important, as information is often conveyed in text form – for example in spoken dialogue between patient and physician – and stored in unstructured medical notes in electronic medical records. The most prominent example is OpenAI’s ChatGPT, which (as noted above) is already widely used in healthcare settings.
By contrast, traditional machine/deep learning systems typically employ non-text neural network algorithms to analyse patterns or predict outcomes. They are commonly used in radiology for image analysis, in cardiology for risk assessment, in emergency room admissions forecasting and in population health management.
For example, Australian company Harrison.ai’s Annalise Enterprise CXR, launched in 2020, can evaluate X-ray images to identify up to 124 findings. Interestingly, the company recently also introduced a radiology-specific foundation model, ‘harrison.rad.1’, that can understand text and images and interact with radiologists’ text queries. Machine/deep learning systems often achieve high accuracy in specific tasks thanks to specialised algorithms and data sets. However, scaling this type of AI can be challenging because each algorithm addresses a particular problem, necessitating multiple models and complex IT and data integrations. This can create a jungle of ‘island solutions’ that is hard for IT departments to manage, and it has driven the emergence of so-called AI marketplaces (similar to an app store) offering curated applications on a unified technology platform.
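As a point of contrast, the sketch below shows the pattern behind many of these established tools in its simplest form: a supervised model trained on structured, non-text features to predict a single, narrowly defined outcome. The features and the ‘30-day readmission’ label are entirely synthetic and for illustration only.

```python
# Minimal 'traditional' machine-learning sketch: a classifier trained on
# structured tabular features to predict one outcome. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic features: age, number of prior admissions, length of stay (days)
X = np.column_stack([
    rng.integers(18, 95, 1000),
    rng.poisson(1.5, 1000),
    rng.integers(1, 30, 1000),
])
# Synthetic 30-day readmission label (illustrative only)
y = (0.02 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 1, 1000)) > 4

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
print(f"Predicted readmission risk for a new patient: {model.predict_proba([[82, 3, 12]])[0, 1]:.2f}")
```

Each model of this kind answers one question well; covering dozens of clinical questions means dozens of separately trained, validated and integrated models, which is exactly the scaling challenge described above.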
Generative AI, while currently less accurate for specialised clinical tasks, likely offers broader applicability and easier scalability. Its generalised models can be fine-tuned for various applications, potentially making deployment more manageable across different healthcare settings.
‘Accuracy and hallucination concerns remain high despite improvements in the quality of models used, whether via specialisation or overall architecture’
It is hard to predict how these technologies will evolve, and it is entirely possible that both approaches will continue to advance, driving better health outcomes and more efficient operations, as we show below.
The interest in GenAI spans all key stakeholders in the healthcare ecosystem, namely patients, clinicians, providers and payers/regulators. We summarise some of the key use cases below (non-exhaustively).
[Summary of key use cases by stakeholder: patients; clinicians; providers (operational effectiveness focus)]
‘While healthcare in general may lag behind other industries in technology adoption, many European providers are already realising real-world benefits’
While GenAI is being tested in various areas, it’s not quite a ‘doctor in your pocket’. Due to the need for high accuracy and strict regulatory standards, most current applications focus on less critical, non-clinical tasks.
GenAI shines in areas like customer service, appointment scheduling, reminders and insurance checks – tasks where high precision isn’t always essential and human error rates are already notable. For instance, Worthwell Health’s AI chatbot now handles 94% of patient inquiries, and one of our clients has achieved 80% call centre coverage with GenAI, leading to major productivity gains.
GenAI is also proving effective in streamlining pre-/post-surgery consultations and gathering patient-reported outcome measures (PROMs), tasks traditionally handled by nurses or allied health professionals. In these cases, GenAI manages about 80% of the workload, achieving significant efficiency gains due to the relatively simple data integration requirements compared to clinical decision support.
On the clinical side, GenAI offers modest productivity improvements, with co-pilot tools saving physicians up to 20% of admin time, mostly through automated forms. Other tools, like asynchronous chat, hold potential to boost consultations per day. Even simple patient reminders have proven valuable in reducing no-show rates, especially in high no-show regions like the GCC.
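The ‘automated forms’ part of that gain typically involves the model turning a free-text note into the structured fields a clinic system expects, with the clinician reviewing the result before sign-off. The sketch below illustrates one such pattern; the consultation note, field list and model name are placeholders rather than a description of any specific product.

```python
# Sketch of an 'automated forms' co-pilot step: extract structured fields from
# a free-text consultation note. Note, fields and model name are placeholders.
import json
from openai import OpenAI  # assumes the openai>=1.0 client and an API key in the environment

consultation_note = (
    "55-year-old male seen for follow-up of type 2 diabetes. Good adherence to "
    "metformin 1g twice daily. BP 142/88. Plan: lifestyle advice, recheck HbA1c in 3 months."
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",                       # placeholder model name
    response_format={"type": "json_object"},   # request machine-readable JSON output
    messages=[
        {"role": "system",
         "content": "Extract these fields from the clinical note and return JSON: "
                    "age, sex, diagnosis, current_medication, blood_pressure, follow_up_plan. "
                    "Use null for anything not stated."},
        {"role": "user", "content": consultation_note},
    ],
)

form_fields = json.loads(response.choices[0].message.content)
print(form_fields)  # pre-filled form for the clinician to review and approve
```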
For maximum impact, GenAI co-pilots need integration with patient and organisational data – a complex task, especially for large inpatient facilities with diverse IT systems. Smaller outpatient providers and physician practices, with fewer and simpler IT systems, may find GenAI deployment easier, provided they have the scale to justify it.
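In practice, that integration often means pulling context from the electronic record through a standard interface such as HL7 FHIR before the co-pilot is prompted. The sketch below illustrates the idea against a hypothetical FHIR R4 endpoint; the server URL and patient ID are invented, and any real deployment would need SMART-on-FHIR authorisation and strict data governance.

```python
# Sketch of fetching patient context from a FHIR R4 server to ground a GenAI
# co-pilot. The endpoint and patient ID are hypothetical placeholders.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/R4"  # hypothetical endpoint
PATIENT_ID = "12345"                                 # hypothetical patient

def fetch_patient_context(base_url: str, patient_id: str) -> str:
    """Return basic demographics and active conditions as plain text for an LLM prompt."""
    headers = {"Accept": "application/fhir+json"}
    patient = requests.get(f"{base_url}/Patient/{patient_id}", headers=headers, timeout=10).json()
    conditions = requests.get(
        f"{base_url}/Condition",
        params={"patient": patient_id, "clinical-status": "active"},
        headers=headers, timeout=10,
    ).json()

    name = patient.get("name", [{}])[0]
    display_name = " ".join(name.get("given", []) + [name.get("family", "")]).strip()
    problems = [
        entry["resource"].get("code", {}).get("coding", [{}])[0].get("display", "unknown")
        for entry in conditions.get("entry", [])
    ]
    return (f"Patient: {display_name}, born {patient.get('birthDate', 'unknown')}. "
            f"Active conditions: {', '.join(problems) or 'none recorded'}.")

# The resulting summary would be supplied to the co-pilot as grounding context.
print(fetch_patient_context(FHIR_BASE, PATIENT_ID))
```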
For now, precise clinical decision support and population health management remain better suited to traditional machine learning and analytics.
In summary, GenAI offers healthcare providers a compelling opportunity for strategic refresh and tangible impact. While healthcare in general may lag behind other industries in technology adoption, many European providers are already realising real-world benefits. It is also worth noting that where these tools are not deployed by their employers, clinicians will likely turn to GenAI on their smartphones anyway, with far weaker privacy protection and little integration of useful and important patient data.
As the saying goes, AI won’t replace healthcare providers, but those who embrace it will likely secure a competitive advantage.
References are available on request.
Guillaume Duparc is a Partner and Klaus Boehncke is Global Digital Health Lead and Partner at L.E.K. Consulting