1. Healthcare AI Classification Under Annex III
The EU AI Act's high-risk classification in healthcare operates through two pathways. The first is the Annex I pathway (Article 6(1)): AI systems that are themselves medical devices under the MDR (Regulation (EU) 2017/745) or in vitro diagnostic devices under the IVDR (Regulation (EU) 2017/746), or safety components of such devices, are classified as high-risk where the device requires third-party conformity assessment. The second is the Annex III pathway, which lists specific high-risk application domains.
For healthcare, the most relevant classification categories are:
- Annex III, point 2(a): AI intended as a safety component in the management and operation of critical digital infrastructure — relevant for hospital IT systems where AI manages critical patient monitoring infrastructure
- Annex III, point 5(a): AI used to evaluate eligibility for essential public services and benefits, including healthcare services — relevant for AI that gates access to healthcare or prioritizes care allocation
- Annex I (MDR/IVDR products): AI embedded in or constituting medical devices — the primary classification pathway for diagnostic and clinical AI
Classification Practical Note
The most consequential classification decision for healthcare organizations is whether an AI system constitutes a medical device under MDR. This is a question of intended purpose: if the AI provides diagnosis, prognosis, monitoring, or treatment recommendations for individual patients, it is very likely a medical device under MDR and therefore high-risk under the EU AI Act's Annex I pathway. The EU MDR's definition of medical device is broad and regularly captures AI-powered clinical decision support tools that developers did not initially consider medical devices.
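The intended-purpose test above can be sketched as a first-pass triage helper. This is a minimal illustration, not legal logic: the class fields, function name, and decision rule are assumptions introduced for the sketch, and any "likely" result still requires formal regulatory review.

```python
from dataclasses import dataclass

# Hypothetical triage helper: field names and the decision rule are
# illustrative simplifications of the MDR intended-purpose test.
@dataclass
class AISystemProfile:
    patient_specific_output: bool    # diagnosis, prognosis, monitoring, or treatment recommendations
    intended_for_clinical_use: bool  # stated intended purpose targets individual patient care

def first_pass_classification(profile: AISystemProfile) -> str:
    """Rough first-pass screen; any 'likely' result needs formal regulatory review."""
    if profile.patient_specific_output and profile.intended_for_clinical_use:
        return "likely MDR medical device -> high-risk via EU AI Act Annex I pathway"
    return "possibly outside MDR scope -> still check Annex III categories"

print(first_pass_classification(AISystemProfile(True, True)))
```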
2. MDR/IVDR Intersection: Where EU AI Act Overlaps
The Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) were developed before the EU AI Act and have their own comprehensive conformity assessment frameworks. The EU AI Act recognizes this and establishes a coordinated conformity assessment procedure for AI systems that are medical devices — Article 43(3) folds the Act's high-risk requirements into the sectoral MDR/IVDR conformity assessment to avoid complete duplication of effort.
However, “not completely duplicative” does not mean “automatically satisfied.” Healthcare organizations must map their AI medical devices against both frameworks and identify the gaps. Key areas where EU AI Act adds obligations not covered by MDR/IVDR include:
Article 9 Risk Management System
The EU AI Act requires a continuous, documented risk management system covering AI-specific risks. MDR requires risk management (typically per ISO 14971) and clinical evaluation — overlapping, but with different scope and documentation structure.
Article 14 Human Oversight
EU AI Act Article 14 is more prescriptive than MDR usability requirements. Specific oversight procedures, designated oversight persons, and override logging are EU AI Act-specific obligations.
Article 49 EU Database Registration
Article 49 requires registration in the EU database (established under Article 71) before an Annex III high-risk system is placed on the market — a registry separate from EUDAMED (the MDR device registry). Where an AI medical device also falls within an Annex III category, dual registration applies.
GPAI Model Transparency
If the AI medical device is built on a general-purpose AI (GPAI) foundation model, additional transparency obligations under Chapter V (Articles 51–56) apply — these have no MDR equivalent.
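The four gap areas above lend themselves to a simple mapping exercise. The sketch below is illustrative only: the coverage judgments are assumptions drawn from the list above, not legal conclusions, and a real gap analysis must be done clause by clause with counsel.

```python
# Illustrative gap map built from the four areas above; the coverage
# judgments are assumptions for the sketch, not legal conclusions.
AI_ACT_OBLIGATIONS = {
    "Article 9 risk management system": "partial",   # MDR clinical evaluation differs in scope
    "Article 14 human oversight":       "partial",   # MDR usability is less prescriptive
    "Article 49 database registration": "none",      # EUDAMED is a separate registry
    "GPAI transparency (Chapter V)":    "none",      # no MDR equivalent
}

def ai_act_specific_work(obligations: dict) -> list:
    """Obligations needing AI-Act-specific work on top of an MDR technical file."""
    return [name for name, coverage in obligations.items() if coverage != "full"]

for item in ai_act_specific_work(AI_ACT_OBLIGATIONS):
    print(item)
```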
3. High-Risk Healthcare AI: Imaging, CDS, and Triage
The following healthcare AI use cases are clearly high-risk under the EU AI Act and require the full Articles 9–15 compliance program plus conformity assessment:
Diagnostic Imaging AI
AI systems that analyze radiological images (CT, MRI, X-ray, PET) to detect pathologies, measure lesions, or classify findings — including AI-powered radiology reading assistance tools, cancer screening AI, and ophthalmology screening AI. These systems are medical devices under MDR and high-risk via the EU AI Act's Annex I pathway.
Clinical Decision Support (CDS)
AI systems that recommend specific treatments, drug dosages, or clinical pathways for individual patients based on their clinical data. CDS tools that provide patient-specific clinical recommendations — rather than general medical information — are medical devices under MDR. The distinction between general medical information (not a medical device) and patient-specific recommendations (medical device) is the critical classification decision.
Patient Risk Stratification
AI systems that assign patients to risk categories to prioritize clinical interventions — sepsis prediction models, deterioration risk scores, 30-day readmission risk models. These systems influence clinical care decisions at the individual patient level and are medical devices under MDR where their output is intended to drive clinical action.
AI in Drug Discovery and Development
AI systems used in drug discovery (target identification, molecular modeling) are generally not medical devices and may not be high-risk under the EU AI Act — they do not directly interact with patients. However, AI systems used in clinical trial design that influence patient inclusion/exclusion decisions may cross into high-risk territory.
4. Notified Body Requirements for Healthcare AI
For AI medical devices, both MDR and the EU AI Act may require third-party conformity assessment by an EU-designated notified body. Understanding when a notified body is mandatory — and for which regulatory framework — is essential for planning compliance timelines and budgets.
Under MDR, notified body assessment is required for Class IIa, IIb, and III devices; Class I devices with a measuring function, sterile Class I devices, and reusable surgical instruments also require notified body involvement for specific aspects. Most AI-powered diagnostic tools fall in Class IIa or higher under MDR classification Rule 11, which governs software intended to inform diagnostic or therapeutic decisions.
Under the EU AI Act, third-party conformity assessment under Annex VII applies chiefly to Annex III biometric systems where harmonised standards have not been fully applied; most other Annex III systems use internal control under Annex VI. For AI medical devices, Article 43(3) folds the Act's requirements into the existing MDR/IVDR conformity assessment procedure — but this does not automatically eliminate the need for EU AI Act-specific assessment; it streamlines the process.
Notified Body Timeline Planning
EU notified body capacity for both MDR and EU AI Act assessments is severely constrained. Healthcare AI developers should plan for:
- 6–12 months lead time to secure a notified body designation for MDR assessment
- Additional 3–6 months for EU AI Act-specific documentation review
- Technical file preparation: 6–9 months for a complex AI medical device
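The planning ranges above compound into a substantial calendar commitment. The back-of-envelope arithmetic below illustrates one way to combine them; treating technical file preparation as parallel with the notified body lead time is an assumption of the sketch, not guidance from any regulator.

```python
# Back-of-envelope schedule from the ranges above (in months). Running
# technical file preparation in parallel with the notified body lead
# time is an assumption of this sketch.
LEAD_TIME = (6, 12)     # secure notified body designation for MDR assessment
AI_ACT_REVIEW = (3, 6)  # EU AI Act-specific documentation review
FILE_PREP = (6, 9)      # technical file preparation (assumed parallel with lead time)

# Critical path: the longer of (lead time, file prep), then the AI Act review
best = max(LEAD_TIME[0], FILE_PREP[0]) + AI_ACT_REVIEW[0]
worst = max(LEAD_TIME[1], FILE_PREP[1]) + AI_ACT_REVIEW[1]
print(f"Estimated calendar time: {best}-{worst} months")  # 9-18 months
```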
5. Article 14 Human Oversight in Clinical Settings
Article 14 of the EU AI Act is the provision with the most direct operational impact on healthcare AI deployment. It requires that high-risk AI systems be designed to allow effective oversight by natural persons — and that deployers (hospitals and healthcare institutions) implement human oversight in practice.
In clinical practice, Article 14 means:
- Clinicians must be able to understand AI outputs: what the system recommends and the basis for the recommendation
- Clinicians must be able to override AI recommendations without organizational or system barriers to override
- Override decisions must be logged — when clinicians deviate from AI recommendations, the deviation and its clinical rationale should be recorded
- Designated qualified persons must be responsible for human oversight — this cannot be diffused across an entire department without accountability
- Clinicians using high-risk AI must receive adequate training on the AI system's capabilities and limitations
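The override-logging requirement in the list above can be sketched as a minimal audit record. The field names and the `record_decision` helper are illustrative assumptions; a real deployment must follow institutional audit and retention rules and integrate with clinical systems.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Minimal sketch of an Article 14 override log entry; field names are
# assumptions for illustration, not a prescribed schema.
@dataclass
class OverrideRecord:
    clinician_id: str
    ai_recommendation: str
    clinician_decision: str
    clinical_rationale: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def is_override(self) -> bool:
        return self.ai_recommendation != self.clinician_decision

audit_log: list = []

def record_decision(rec: OverrideRecord) -> None:
    # Log every deviation from the AI recommendation, with its clinical rationale
    if rec.is_override():
        audit_log.append(rec)

record_decision(OverrideRecord("dr-001", "start antibiotic A", "start antibiotic B",
                               "documented allergy to antibiotic A"))
print(len(audit_log))  # 1
```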
Patient Safety Implication
Article 14 is a patient safety provision first and a compliance provision second. AI systems that produce recommendations clinicians cannot understand, question, or override in practice create patient safety risks that EU AI Act enforcement will target directly. Healthcare organizations should treat Article 14 compliance as a clinical governance requirement, not merely a legal checkbox.
6. Clinical Validation vs Regulatory Validation
A critical distinction for healthcare AI compliance is the difference between clinical validation and regulatory validation. Healthcare organizations and AI developers sometimes conflate these — assuming that a clinically validated AI system is also regulatory-compliant. This assumption is incorrect and can lead to significant compliance gaps.
Clinical Validation
- Demonstrates clinical effectiveness in target population
- Measures diagnostic accuracy, sensitivity, specificity
- Conducted through clinical trials or prospective studies
- Peer-reviewed publication of results
- Endpoint: does the AI improve clinical outcomes?
Regulatory Validation (EU AI Act)
- Demonstrates compliance with Article 15 accuracy requirements
- Documents risk management system (Article 9)
- Validates data governance procedures (Article 10)
- Confirms human oversight measures (Article 14)
- Endpoint: does the AI meet EU AI Act obligations?
A well-validated clinical AI system may still fail regulatory validation if its technical documentation is incomplete, its risk management system is not structured per Article 9, or its human oversight procedures are not implemented. Healthcare organizations should run clinical and regulatory validation as parallel but distinct workstreams.
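The parallel-workstreams point can be made concrete with a simple completion tracker. The item names are drawn from the two lists above; the completion model and the `ready_for_market` rule are assumptions of the sketch.

```python
# Sketch: track clinical and regulatory validation as distinct workstreams.
# Item names come from the lists above; statuses are illustrative.
workstreams = {
    "clinical": {
        "prospective accuracy study": False,
        "peer-reviewed publication": False,
    },
    "regulatory": {
        "Article 9 risk management file": False,
        "Article 10 data governance record": False,
        "Article 14 oversight procedures": False,
    },
}

def ready_for_market(ws: dict) -> bool:
    """Both workstreams must complete independently; one cannot substitute for the other."""
    return all(all(items.values()) for items in ws.values())

# Finishing clinical validation alone is not enough:
workstreams["clinical"] = {k: True for k in workstreams["clinical"]}
print(ready_for_market(workstreams))  # False
```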
7. GDPR Article 9 + EU AI Act: Double Compliance for Health Data AI
Health data is a special category of personal data under GDPR Article 9, requiring explicit consent or another Article 9(2) legal basis for processing. AI systems that process health data for training, validation, or inference must satisfy both GDPR Article 9 requirements and EU AI Act Article 10 data governance requirements.
Key intersection obligations include:
- Data minimization: GDPR requires processing only necessary data; EU AI Act Article 10 requires training data to be relevant and representative — both push toward careful data selection
- Purpose limitation: GDPR restricts use to specified purposes; EU AI Act training data must be for the specific intended purpose of the AI system
- DPIA: GDPR requires a Data Protection Impact Assessment for high-risk health data processing; EU AI Act Article 9 risk management system serves a different but overlapping function
- Secondary use: Using patient data from clinical care to train AI models requires separate legal basis under GDPR Article 9 — clinical care consent does not extend to AI training
Healthcare organizations should ensure that their EU AI Act compliance programs are built with legal counsel who can address both EU AI Act and GDPR Article 9 obligations simultaneously. A compliance program that satisfies EU AI Act Article 10 but relies on an invalid GDPR Article 9 legal basis for training data is not compliant.
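The secondary-use point above can be expressed as a pre-training data check. This is a hedged sketch: the legal-basis labels are illustrative examples under GDPR Article 9(2), and whether a given basis is actually valid for AI training is a question for counsel, not for code.

```python
# Hedged pre-training data check combining both regimes. The legal-basis
# labels below are illustrative examples, not a definitive list.
ACCEPTABLE_BASES_FOR_TRAINING = {
    "explicit_consent_for_ai_training",  # e.g. Art. 9(2)(a), scoped to training
    "scientific_research_safeguards",    # e.g. Art. 9(2)(j), where applicable
}

def may_use_for_training(record: dict) -> bool:
    gdpr_ok = record.get("art9_legal_basis") in ACCEPTABLE_BASES_FOR_TRAINING
    # EU AI Act Article 10: data must be relevant to the system's intended purpose
    ai_act_ok = record.get("relevant_to_intended_purpose", False)
    return gdpr_ok and ai_act_ok

# Clinical-care consent alone does not cover secondary use for AI training:
print(may_use_for_training({"art9_legal_basis": "clinical_care_consent",
                            "relevant_to_intended_purpose": True}))  # False
```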
8. TraceGov.ai Healthcare Compliance Pathway
TraceGov.ai provides a dedicated healthcare compliance pathway that addresses the EU AI Act + MDR/IVDR + GDPR triple compliance burden. The pathway is pre-mapped to ISO 14971 (medical device risk management), ISO 62304 (medical device software lifecycle), and IEC 82304 (health software) — enabling healthcare AI developers and deployers to build EU AI Act compliance programs that align with existing healthcare quality management frameworks.
- Article 9 risk management system templates aligned with ISO 14971 medical device risk management
- Article 11 technical documentation generator cross-referenced to MDR Annex II technical file requirements
- Article 14 human oversight implementation guide for clinical environments
- Dual registry workflow: EUDAMED (MDR) + EU AI Act database (Article 49)
- GDPR Article 9 and EU AI Act Article 10 data governance cross-mapping
- TAMR+ regulatory Q&A for healthcare AI regulatory questions (74% benchmark accuracy)
9. Frequently Asked Questions
Which healthcare AI systems are classified as high-risk under the EU AI Act?
How does the EU AI Act interact with MDR and IVDR?
What are the Article 14 human oversight requirements for healthcare AI?
Do AI wellness apps and general health chatbots need EU AI Act compliance?
How does TraceGov.ai support healthcare AI compliance?
Related EU AI Act Guides
The Complete EU AI Act Compliance Guide 2025–2027
The definitive pillar guide covering timelines, classifications, penalties, and compliance roadmap
High-Risk AI Systems Classification
Complete Annex III and Annex II classification guide with sector-specific examples
Human Oversight of AI: EU Obligations
Deep dive into Article 14 human oversight requirements across all high-risk AI categories
AI Risk Assessment Framework
Cross-sector AI risk assessment methodology aligned with EU AI Act
