Pharmaceutical Market Europe • April 2025
AI AND MEDICAL INNOVATION
Ensuring AI tools comply with strict standards while promoting fairness, accountability and reliability
By Patrice Navarro
Artificial intelligence (AI) is no longer a distant promise in healthcare. It is already redefining the way diseases are detected, treatments are developed and patients are cared for. From improving diagnostic accuracy in radiology to enabling real-time remote patient monitoring, AI is accelerating medical progress across Europe. In drug discovery, machine learning models are transforming pharmaceutical research, shortening development cycles and unlocking new therapeutic possibilities.
But the legal and regulatory framework is evolving to keep pace. The EU AI Act, GDPR and medical device regulations are laying the foundation for a structured yet innovation-friendly governance model. Legal professionals play a key role in ensuring that AI applications meet the highest standards of safety, transparency and ethical integrity.
A key takeaway from the AI Action Summit in Paris earlier this year was the ongoing tension between regulation and innovation. While perspectives differ, we believe that the core principles remain investment, trust and ethics. Regulation, when well-designed, is not a constraint but a means to foster confidence in AI-driven healthcare.
The challenge for European regulators is therefore to maintain proportionate oversight, encouraging AI adoption while safeguarding patient safety and European ethical standards without falling into the trap of over-regulation.
This article explores the interplay between AI-driven innovation in healthcare and the evolving European regulatory framework, examining how regulatory compliance acts as a pillar of trust, the classification of AI as a medical device, the complexities of processing sensitive health data, and the evolving legal landscape surrounding liability and accountability.
The EU has adopted a structured framework, seeking to align innovation with ethical principles, patient safety and data protection. The EU AI Act, alongside sector-specific regulations such as the Medical Device Regulation (MDR) and Directive 2001/83/EC on medicinal products, establishes a layered regulatory environment. The goal is to ensure AI tools comply with strict standards while promoting fairness, accountability and reliability. Protection of sensitive health data remains a priority under GDPR, reinforcing lawful data usage and security measures. The EU recognises that AI, especially in healthcare, is fundamentally about trust.
Artificial intelligence used for medical purposes generally falls under the MDR ((EU) 2017/745) or the In Vitro Diagnostic Regulation (IVDR, (EU) 2017/746). AI-driven software – classified as software as a medical device (SaMD) – is subject to risk-based classification, with medium- to high-risk applications requiring strict regulatory scrutiny.
The US currently takes a pragmatic approach, regulating AI technologies within existing statutory frameworks while ensuring safety, effectiveness and transparency. The idea is to balance regulatory oversight with flexibility, in the hope of accommodating rapid AI advances in healthcare.
In the EU, by contrast, the AI Act builds on the medical device framework, imposing sector-specific requirements on AI systems in healthcare, particularly those classified as high risk. This classification applies when AI is a safety component of a medical device or when it is integral to clinical decision-making. Unlike the MDR, which classifies medical devices based on clinical risk, the AI Act automatically considers AI-based medical devices as high risk if they require third-party conformity assessment.
To ensure safety and reliability, high-risk AI systems must comply with transparency and explainability obligations, enabling clinicians to interpret AI-generated recommendations. Risk management and human oversight provisions require continuous monitoring, bias detection and the ability to intervene if an AI system produces erroneous results. Data governance and security requirements mandate compliance with GDPR and cybersecurity standards.
Additionally, post-market surveillance obligations under the MDR and the AI Act require continuous monitoring of AI performance after deployment. The EU AI Act also allows for a combined conformity assessment, letting manufacturers undergo a single evaluation process under both the MDR and the AI Act – provided the designated assessment body is authorised under both frameworks.
AI-driven healthcare solutions rely on large-scale data sets to improve accuracy and patient outcomes. However, the use of sensitive health data raises complex legal and ethical challenges, particularly under GDPR. While GDPR establishes high privacy standards, its application to AI models requires careful assessment to balance compliance with innovation.
Health data is classified as sensitive under Article 9 GDPR, meaning its processing is generally prohibited unless a specific exemption applies. AI developers in healthcare must therefore identify both a valid legal basis and an applicable Article 9 exemption, yet these exemptions were not designed with AI’s evolving nature in mind. GDPR provisions on data subject rights pose further challenges, particularly regarding access, rectification and the right to explanation in AI-driven decisions.
However, by adhering to regulatory guidance from authorities such as the French data protection authority (CNIL) and implementing robust controls, developing AI tools in compliance with GDPR remains achievable.
Safeguards such as data minimisation, pseudonymisation and federated learning help AI developers navigate compliance while preserving efficiency. Regulatory sandboxes also offer controlled environments where AI models can be assessed under real-world conditions while remaining within legal boundaries.
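To make the data minimisation and pseudonymisation safeguards concrete, the short Python sketch below shows one way a developer might strip direct identifiers and replace a patient ID with a keyed hash before records enter a training pipeline. It is a minimal illustration only: the field names, the HMAC approach and the key handling are assumptions, not a prescribed GDPR-compliant design.

```python
# Minimal sketch: data minimisation plus pseudonymisation of a patient
# record before it enters an AI training pipeline. Field names and the
# keyed-hash (HMAC) design are illustrative assumptions, not legal advice.
import hmac
import hashlib

# Hypothetical secret key; GDPR expects it to be held separately from
# the pseudonymised data, under technical and organisational safeguards.
SECRET_KEY = b"held-separately-by-the-data-controller"

def pseudonymise_id(patient_id: str) -> str:
    """Replace a direct identifier with an HMAC-SHA256 keyed hash.

    Re-identification is possible only with the key, which is what
    distinguishes pseudonymisation from anonymisation under GDPR.
    """
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

def minimise(record: dict) -> dict:
    """Data minimisation: keep only the fields the model actually needs."""
    allowed = {"age", "diagnosis_code", "lab_result"}
    out = {key: value for key, value in record.items() if key in allowed}
    out["pseudonym"] = pseudonymise_id(record["patient_id"])
    return out

record = {"patient_id": "FR-12345", "name": "Jane Doe",
          "age": 54, "diagnosis_code": "E11", "lab_result": 7.2}
print(minimise(record))  # the name and raw ID never reach the data set
```

Note that pseudonymised data remains personal data under GDPR, so a keyed hash of this kind reduces risk but does not take the processing outside the regulation’s scope.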
Beyond GDPR, compliance with Article 10 of the EU Data Act is essential for AI systems using public or privately held health data. This provision mandates clear access rights, interoperability standards and safeguards against discriminatory data practices. By aligning AI development with these regulatory frameworks, the EU aims to create an environment that fosters innovation while reinforcing trust in AI-driven healthcare.
Early regulatory proposals, such as the AI Liability Directive, raised concerns about stifling innovation with excessive legal burdens. The European Commission’s decision to withdraw the proposal is a positive sign in the ongoing effort to combat over-regulation and simplify the legal framework for AI adoption.
AI-related liability is now addressed under the Product Liability Directive (PLD) and the EU AI Act, creating a legal structure that holds developers, hospitals and healthcare providers accountable without hindering progress. This streamlined approach provides clearer guidelines while avoiding the regulatory overlap that could have resulted from an additional liability directive.
Risk management strategies, compliance protocols and robust AI governance help mitigate liability risks within this more straightforward regulatory environment.
At the same time, AI-driven healthcare solutions must navigate strict data protection laws. Progress at the legislative and regulatory levels, combined with new technical approaches, is demonstrating that compliance is achievable. Authorities provide guidance on lawful AI deployment, while innovations such as federated learning and other privacy-preserving techniques improve model accuracy while limiting the processing of personal data. Advanced anonymisation, encryption and synthetic data generation further help AI tools respect patient privacy while aligning with EU digital health initiatives.
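As an illustration of the federated learning approach mentioned above, the sketch below implements federated averaging (FedAvg) in its simplest form: each hospital trains a model on its own records, and only the resulting weights – never patient-level data – are sent to a central server for averaging. The linear model, gradient step and synthetic data are simplified assumptions for the example.

```python
# Minimal federated averaging (FedAvg) sketch: hospitals train locally
# and share only model weights; raw patient data never leaves a site.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: a few gradient steps of linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(weights, sites):
    """Server step: average locally trained weights, weighted by site size."""
    updates = [(local_update(weights, X, y), len(y)) for X, y in sites]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Three hypothetical hospitals with private, synthetic data sets
rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
sites = []
for n in (40, 60, 80):
    X = rng.normal(size=(n, 2))
    sites.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, sites)
print(w)  # converges towards true_w without pooling patient-level data
```

In practice, shared weights can still leak information about the underlying data, which is why federated learning is usually combined with further safeguards such as encryption of updates or differential privacy.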
AI is fundamentally transforming healthcare, unlocking medical breakthroughs that were previously unimaginable. While regulatory challenges exist, they are not obstacles but enablers, ensuring AI-driven solutions are safe, transparent and trusted. Without the trust and ethical foundations these regulations establish, both healthcare professionals and patients would hesitate to adopt even the most promising technologies, rendering substantial investments ineffective.
However, regulation must strike the right balance. Over-regulation risks becoming counterproductive, deterring investment and slowing innovation at a time when Europe aims to lead in AI-driven healthcare. This is why the sector is also counting on reasonable guidance from authorities and future legislative refinements to ensure proportionate oversight. A framework that is too rigid or burdensome will not only hinder the deployment of cutting-edge medical solutions but could also push AI development elsewhere, limiting Europe’s ability to shape global AI standards in healthcare.
AI is not just the future of healthcare; it is the present. The real question is how fast companies can adapt to leverage AI’s full potential while being compliant and responsible.
Patrice Navarro is a partner at Clifford Chance’s Tech Group