1. What Makes an AI System “High-Risk”?
The EU AI Act (Regulation 2024/1689) designates AI systems as high-risk through two distinct pathways defined in Article 6:
Pathway 1: Safety Component of Regulated Products (Article 6(1))
An AI system is high-risk if it is intended to be used as a safety component of a product covered by existing EU harmonization legislation listed in Annex I, and that product requires a third-party conformity assessment.
This covers AI embedded in: medical devices (Regulation 2017/745), machinery (Regulation 2023/1230), toys (Directive 2009/48/EC), lifts, pressure equipment, radio equipment, civil aviation, motor vehicles, railway systems, and marine equipment.
Pathway 2: Annex III Standalone Systems (Article 6(2))
AI systems that fall within one of the 8 use-case areas listed in Annex III of the regulation are classified as high-risk, regardless of whether they are embedded in a physical product.
These are AI systems that operate in domains where errors or biases can significantly harm fundamental rights, health, or safety.
The distinction matters for compliance: Pathway 1 systems follow the conformity assessment procedures of their respective product legislation (with AI Act requirements layered on top), while Pathway 2 systems follow the AI Act's own conformity assessment procedures under Article 43.
2. The 8 Annex III High-Risk Areas
Annex III of the EU AI Act defines 8 areas of high-risk AI use. Understanding these categories is essential for determining whether your system requires full compliance with the high-risk requirements (applicable from 2 August 2026).
Area 1: Biometrics
AI systems intended for:
- Remote biometric identification (see the note below on prohibited real-time use)
- Biometric categorization based on sensitive or protected attributes
- Emotion recognition in contexts not covered by the prohibition (the Act prohibits emotion inference in workplaces and educational institutions)
Note: Real-time remote biometric identification in publicly accessible spaces for law enforcement is prohibited under Article 5 (subject to narrow exceptions), not merely high-risk.
Area 2: Critical Infrastructure
AI systems used as safety components in the management and operation of:
- Road traffic and supply of water, gas, heating, and electricity
- Digital infrastructure (network management, traffic routing)
- Critical infrastructure sectors under EU Directive 2022/2557 (CER Directive)
Area 3: Education and Vocational Training
AI systems that determine access to or outcomes in education:
- Determining access to educational institutions or programs
- Assessing students or evaluating learning outcomes
- Assessing the appropriate level of education for individuals
- Monitoring and detecting prohibited behavior during tests
Area 4: Employment, Workers' Management, and Access to Self-Employment
AI systems that affect employment decisions:
- Recruitment and screening (CV sorting, candidate ranking)
- Decisions on hiring, promotion, termination, and task allocation
- Monitoring and evaluating employee performance and behavior
Area 5: Access to Essential Private and Public Services
AI systems that gate access to essential services and benefits:
- Creditworthiness assessment and credit scoring for natural persons
- Risk assessment and pricing in life and health insurance
- Evaluating eligibility for public assistance benefits and services
- Dispatching or prioritizing emergency first response services
Note: AI systems used purely for detecting financial fraud are expressly carved out of the creditworthiness category.
Area 6: Law Enforcement
AI systems used by law enforcement authorities for:
- Individual risk assessment (predicting criminal behavior or recidivism)
- Polygraph and similar tools to detect deception
- Evaluating the reliability of evidence in criminal investigations
- Assessing the risk of offending or re-offending (based on objective, verifiable facts, not solely on profiling)
- Profiling during detection, investigation, or prosecution
Area 7: Migration, Asylum, and Border Control
AI systems used by competent authorities for:
- Polygraph and similar tools for migration/asylum applicants
- Assessing risks (security, irregular migration, public health)
- Assisting examination of applications for asylum, visa, and residence
- Detecting, recognizing, or identifying persons in migration context
Area 8: Administration of Justice and Democratic Processes
AI systems that assist judicial and democratic functions:
- Assisting judicial authorities in researching and interpreting facts and law
- Applying the law to concrete facts in judicial proceedings
- AI used in alternative dispute resolution
- Influencing the outcome of elections or referendums (not purely organizational/logistical tools)
3. Self-Assessment: Is YOUR System High-Risk?
Determining whether your AI system is high-risk requires a structured decision process. Follow this flowchart logic:
Step 1
Is your AI system embedded in or used as a safety component of a product covered by Annex I EU harmonization legislation? If no, skip to Step 2.
Step 1a
Does that product require a third-party conformity assessment under its respective legislation? If yes to both Step 1 and Step 1a, the system is high-risk via Pathway 1.
Step 2
Does your AI system fall within any of the 8 Annex III use-case areas? If no, the system is not high-risk (other AI Act obligations may still apply).
Step 3
Does the system perform profiling of natural persons? If yes, it is high-risk, and the Article 6(3) exception is unavailable.
Step 4
Does the system pose a significant risk of harm to health, safety, or fundamental rights, or materially influence decision-making outcomes? If yes, it is high-risk; if no, the Article 6(3) exception may apply (see Section 8).
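The following Python sketch encodes this flowchart logic under simplified yes/no inputs. The `AISystemProfile` fields and `classify` function are illustrative assumptions, not an official classification tool, and borderline cases still need legal review.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemProfile:
    annex_i_safety_component: bool         # Step 1
    third_party_assessment_required: bool  # Step 1a
    annex_iii_area: Optional[str]          # Step 2, e.g. "employment" or None
    performs_profiling: bool               # Step 3 (profiling per GDPR Art. 4(4))
    significant_risk_or_influence: bool    # Step 4

def classify(p: AISystemProfile) -> str:
    # Pathway 1: safety component of an Annex I product (Article 6(1))
    if p.annex_i_safety_component and p.third_party_assessment_required:
        return "HIGH-RISK (Pathway 1, Article 6(1))"
    # Pathway 2: Annex III standalone systems (Article 6(2))
    if p.annex_iii_area is not None:
        if p.performs_profiling:
            return "HIGH-RISK (Annex III + profiling; Article 6(3) exception unavailable)"
        if p.significant_risk_or_influence:
            return "HIGH-RISK (Annex III, Article 6(2))"
        return "POSSIBLE Article 6(3) exception (document the assessment and notify the authority)"
    return "NOT HIGH-RISK (other AI Act obligations may still apply)"

# Example: an Annex III employment system that profiles candidates
print(classify(AISystemProfile(False, False, "employment", True, True)))
```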
When in Doubt, Classify Higher
The European Commission and the AI Office have signaled that organizations should err on the side of caution. Classifying a system as high-risk when it is borderline is safer than under-classifying and facing enforcement action. The OECD AI Policy Observatory (2024) reports that 42% of enterprise AI systems initially classified as “limited risk” by their deployers would likely be reclassified as high-risk under strict Annex III interpretation.
4. Provider vs Deployer Obligations
The EU AI Act distinguishes between providers (who develop or commission the AI system) and deployers (who use the AI system under their authority). Their obligations differ significantly:
| Obligation | Provider | Deployer |
|---|---|---|
| Risk management system | Must establish and maintain | Must use system as intended |
| Data governance | Must ensure training data quality | Must ensure input data relevance |
| Technical documentation | Must create and maintain | Must keep provider docs accessible |
| Automatic logging | Must design into the system | Must retain logs (min. 6 months) |
| Transparency | Must provide instructions for use | Must inform affected individuals |
| Human oversight | Must design oversight mechanisms | Must assign competent persons |
| Conformity assessment | Must perform before market placement | Not responsible |
| CE marking & EU declaration | Must affix and declare | Not responsible |
| EU database registration | Must register system | Must register use (certain categories) |
| Post-market monitoring | Must establish monitoring system | Must monitor and report incidents |
| Fundamental rights assessment | Not required | Must conduct (public bodies + certain private deployers, Article 27) |
When Deployers Become Providers
A deployer is treated as a provider (Article 25) if they: (a) put their own name or trademark on a high-risk AI system already on the market, (b) make a substantial modification to a high-risk AI system, or (c) modify the intended purpose of an AI system in a way that makes it high-risk. This is critical for enterprises that customize vendor AI systems.
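A minimal sketch of these three triggers, assuming boolean inputs; the function name and clause labels are illustrative:

```python
def provider_triggers(rebrands: bool, substantially_modifies: bool,
                      repurposes_to_high_risk: bool) -> list[str]:
    """Return which Article 25 triggers apply; a non-empty list means the
    deployer assumes provider obligations for the system."""
    triggers = []
    if rebrands:
        triggers.append("(a) own name/trademark on a marketed high-risk system")
    if substantially_modifies:
        triggers.append("(b) substantial modification of a high-risk system")
    if repurposes_to_high_risk:
        triggers.append("(c) new intended purpose making the system high-risk")
    return triggers
```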
5. Sector-by-Sector Examples
The abstract Annex III categories become concrete when mapped to real-world sector applications. Here are specific examples per sector:
Healthcare
- HIGH-RISK: AI-powered diagnostic imaging (falls under medical device regulation + Annex I). AI for triage prioritization in emergency departments (Area 5: dispatching emergency services). AI for determining treatment eligibility or insurance coverage (Area 5: essential services).
- NOT HIGH-RISK: AI scheduling assistants for hospitals. AI for medical literature search and summarization (unless it applies findings to patient cases). Administrative workflow automation.
Financial Services
- HIGH-RISK: Credit scoring models (Area 5: creditworthiness). Automated loan approval systems. Insurance risk assessment and pricing for life/health (Area 5). Anti-money laundering AI that profiles individuals (Area 6: law enforcement adjacent).
- NOT HIGH-RISK: Fraud detection on transaction patterns (typically). Market data analysis tools. Customer service chatbots (limited risk — transparency obligation only). Portfolio optimization algorithms.
Education
- HIGH-RISK: AI-based exam grading systems (Area 3: assessing students). University admission screening AI (Area 3: access to education). AI proctoring systems that monitor student behavior during exams (Area 3: detecting prohibited behavior).
- NOT HIGH-RISK: Learning management systems. AI-powered tutoring tools (unless they determine grades). Content recommendation engines for educational materials.
Law Enforcement & Public Sector
- HIGH-RISK: Recidivism prediction tools (Area 6). AI-assisted evidence analysis (Area 6). Visa and asylum application assessment (Area 7). AI for border control identification (Area 7).
- PROHIBITED: Social scoring by public authorities. Real-time biometric ID in public spaces (narrow law enforcement exceptions only). Predictive policing based solely on profiling without objective facts.
The Gray Zone
Many AI systems sit in a gray zone between categories. A Deloitte survey (2025) found that 63% of enterprise AI systems require expert legal analysis to determine their classification, and 28% of initial self-assessments are revised after legal review. When in doubt, seek specialized regulatory counsel or use systematic classification tools.
6. The 8 Technical Requirements for High-Risk AI Systems
Once classified as high-risk, an AI system must meet 8 mandatory technical requirements before it can be placed on the EU market or put into service:
Risk Management System (Article 9)
A continuous, iterative risk management system that identifies, estimates, evaluates, and mitigates risks throughout the entire AI lifecycle. Must produce documented, auditable results with residual risks judged acceptable.
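As a sketch of what a documented, auditable risk register entry might look like, assuming a simple likelihood-times-severity scoring scheme (the fields and scale are illustrative, not mandated by Article 9):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    hazard: str                  # e.g. "biased ranking of job candidates"
    likelihood: int              # 1 (rare) .. 5 (frequent)
    severity: int                # 1 (negligible) .. 5 (critical)
    mitigations: list[str] = field(default_factory=list)
    residual_risk_accepted: bool = False      # documented acceptance decision
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        # Simple likelihood x severity matrix; re-scored after each mitigation
        return self.likelihood * self.severity
```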
Data and Data Governance (Article 10)
Training, validation, and testing datasets must meet quality criteria: relevance, representativeness, accuracy, and completeness. Appropriate statistical properties for the intended purpose. Bias examination and mitigation measures.
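A minimal sketch of the kind of dataset checks Article 10 points toward, using pandas; the metrics and function name are illustrative assumptions, and real bias examination goes well beyond a single disparity figure:

```python
import pandas as pd

def data_governance_report(df: pd.DataFrame, label: str, protected: str) -> dict:
    by_group = df.groupby(protected)[label].mean()  # positive-label rate per group
    return {
        "missing_rate": df.isna().mean().to_dict(),          # completeness
        "group_shares": df[protected].value_counts(normalize=True).to_dict(),  # representativeness
        "positive_rate_by_group": by_group.to_dict(),        # bias examination
        "max_rate_gap": float(by_group.max() - by_group.min()),  # crude disparity signal
    }
```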
Technical Documentation (Article 11)
Comprehensive documentation drawn up before the system is placed on the market, kept up to date. Must contain information enabling assessment of compliance. Annex IV specifies the required contents in detail.
Record-Keeping and Logging (Article 12)
Automatic recording of events (logs) throughout the system's lifetime. Logs must be adequate to enable tracing of system operation, identify risk situations, and facilitate post-market monitoring. Minimum retention: 6 months.
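A hedged sketch of append-only event logging with a retention window; the JSON-lines schema and the 183-day cutoff are assumptions chosen to respect the six-month minimum:

```python
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # at least six months before purging

def log_event(path: str, event: dict) -> None:
    # Append one timestamped record per system event (JSON lines)
    record = {"ts": datetime.now(timezone.utc).isoformat(), **event}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def purge_expired(path: str) -> None:
    # Drop only records older than the retention window
    cutoff = datetime.now(timezone.utc) - RETENTION
    with open(path, encoding="utf-8") as f:
        records = f.readlines()
    kept = [r for r in records
            if datetime.fromisoformat(json.loads(r)["ts"]) >= cutoff]
    with open(path, "w", encoding="utf-8") as f:
        f.writelines(kept)
```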
Transparency and Information to Deployers (Article 13)
Systems must be designed to be sufficiently transparent for deployers to interpret outputs appropriately. Instructions for use must include: provider identity, system characteristics, performance metrics, known limitations, human oversight measures, and expected lifetime.
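One way to make the required instructions-for-use contents concrete is a typed record mirroring the items listed above; the field names and example values are illustrative assumptions:

```python
from typing import TypedDict

class InstructionsForUse(TypedDict):
    provider_identity: str
    system_characteristics: str             # intended purpose, inputs, outputs
    performance_metrics: dict[str, float]   # e.g. accuracy on the intended task
    known_limitations: list[str]
    human_oversight_measures: list[str]
    expected_lifetime: str

example: InstructionsForUse = {
    "provider_identity": "ExampleCorp (hypothetical)",
    "system_characteristics": "CV screening for engineering roles",
    "performance_metrics": {"accuracy": 0.91},
    "known_limitations": ["Not validated for non-EU CV formats"],
    "human_oversight_measures": ["Recruiter reviews every rejection"],
    "expected_lifetime": "3 years from deployment",
}
```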
Human Oversight (Article 14)
Systems must be designed to allow effective human oversight during use. Persons assigned oversight must be able to fully understand the system, correctly interpret outputs, decide not to use the system, and override or reverse outputs.
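A minimal sketch of one oversight pattern consistent with these capabilities: route low-confidence outputs to a human reviewer who can confirm, amend, or decline to use them. The threshold and callback are illustrative assumptions, not an Article 14 prescription:

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def with_human_oversight(output: T, confidence: float,
                         human_review: Callable[[T], T],
                         threshold: float = 0.8) -> T:
    # Below the threshold, a human decides whether to keep, change,
    # or override the system's output
    if confidence < threshold:
        return human_review(output)
    return output
```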
Accuracy, Robustness, and Cybersecurity (Article 15)
Systems must achieve appropriate levels of accuracy for their intended purpose. Must be resilient to errors, faults, and inconsistencies. Technical solutions for cybersecurity threats including adversarial data manipulation (data poisoning, model evasion).
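As an illustration of one simple robustness probe (not a substitute for proper adversarial testing), the sketch below compares accuracy on clean versus randomly perturbed inputs; `model.predict`, the noise model, and the epsilon value are assumptions:

```python
import numpy as np

def robustness_gap(model, X: np.ndarray, y: np.ndarray, eps: float = 0.01) -> float:
    # Accuracy on clean inputs
    clean_acc = (model.predict(X) == y).mean()
    # Accuracy after small Gaussian perturbation of the inputs
    X_noisy = X + np.random.default_rng(0).normal(0, eps, X.shape)
    noisy_acc = (model.predict(X_noisy) == y).mean()
    return float(clean_acc - noisy_acc)  # large gap => fragile under perturbation
```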
Quality Management System (Article 17)
Providers must implement a quality management system including: compliance strategy, design and development techniques, testing and validation procedures, data management protocols, risk management integration, post-market monitoring, incident reporting, and communication with authorities.
These 8 requirements are cumulative: high-risk AI providers must satisfy all of them. A European Parliament study (2025) estimated the average cost of achieving full compliance for a single high-risk AI system at €200,000–€400,000, with ongoing annual maintenance costs of €50,000–€150,000.
7. Conformity Assessment Pathways
Before a high-risk AI system can be placed on the EU market, it must undergo a conformity assessment to demonstrate compliance with all applicable requirements. The EU AI Act provides two assessment pathways:
Self-Assessment (Annex VI)
The provider conducts an internal conformity assessment based on their quality management system and technical documentation.
Applies to: Most Annex III high-risk AI systems except biometric identification systems intended for law enforcement.
Third-Party Assessment (Annex VII)
An independent notified body audits the AI system against all requirements. Mandatory for certain categories.
Applies to: Biometric identification AI for law enforcement, and Pathway 1 systems where product legislation requires third-party assessment.
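A small sketch of this routing logic under simplified inputs; the parameters are assumptions, and edge cases (such as harmonised-standards choices for biometric systems) are omitted:

```python
def assessment_pathway(is_annex_i_product: bool,
                       product_requires_third_party: bool,
                       is_law_enforcement_biometric: bool) -> str:
    if is_law_enforcement_biometric:
        return "Third-party assessment (Annex VII, notified body)"
    if is_annex_i_product and product_requires_third_party:
        return "Third-party assessment under product legislation (Pathway 1)"
    return "Internal self-assessment (Annex VI)"
```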
After successful conformity assessment, providers must: affix the CE marking, draw up an EU Declaration of Conformity, and register the system in the EU database maintained by the European Commission. For a detailed walkthrough, see our EU AI Act Compliance Guide.
8. Exceptions and Reclassification
The EU AI Act includes a narrowly defined exception mechanism under Article 6(3) that allows certain Annex III systems to be reclassified as non-high-risk:
An AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. Specifically, a system qualifies for this exception if it:
- Performs a narrow procedural task (e.g., converting unstructured data to structured format)
- Is intended to improve the result of a previously completed human activity
- Detects decision-making patterns without replacing or influencing prior human assessment
- Performs a preparatory task for an assessment relevant to the use cases in Annex III
Critical Exception: Profiling
The Article 6(3) exception does NOT apply to any AI system that performs profiling of natural persons, as defined in GDPR Article 4(4). Any AI system that profiles individuals and falls within Annex III is automatically high-risk with no exception possible. This includes automated processing to evaluate personal aspects: work performance, economic situation, health, preferences, interests, reliability, behavior, location, or movements.
Providers who claim this exception must document their assessment and notify the relevant national authority before placing the system on the market. The authority may disagree and require high-risk classification.
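The sketch below encodes the exception test as described here, including the profiling carve-out; the criterion names paraphrase the statute and are not official identifiers:

```python
EXCEPTION_CRITERIA = {
    "narrow_procedural_task",
    "improves_completed_human_activity",
    "detects_patterns_without_influencing_prior_human_assessment",
    "preparatory_task_only",
}

def exception_available(performs_profiling: bool, criteria_met: set[str]) -> bool:
    if performs_profiling:
        return False  # profiling systems in Annex III are always high-risk
    # Meeting at least one listed criterion can support the exception,
    # subject to documented assessment and prior notification
    return bool(criteria_met & EXCEPTION_CRITERIA)
```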
9. Frequently Asked Questions
What makes an AI system high-risk under the EU AI Act?
An AI system is high-risk if it is a safety component of an Annex I product requiring third-party conformity assessment (Article 6(1)), or if it falls within one of the 8 Annex III use-case areas (Article 6(2)).
What are the 8 technical requirements for high-risk AI systems?
Risk management, data and data governance, technical documentation, record-keeping and logging, transparency and information to deployers, human oversight, accuracy/robustness/cybersecurity, and a quality management system (Articles 9–15 and 17).
What is the difference between a provider and a deployer?
A provider develops or commissions the AI system and places it on the market; a deployer uses it under their own authority. Their obligations differ substantially (see Section 4), and a deployer can become a provider by rebranding, substantially modifying, or repurposing a system.
Can an AI system be reclassified from high-risk?
Yes. Under the Article 6(3) exception, an Annex III system that poses no significant risk of harm and does not perform profiling can be documented as non-high-risk, subject to notifying the national authority (see Section 8).
When do high-risk AI system requirements take effect?
The high-risk requirements apply from 2 August 2026 for Annex III systems; systems covered by Annex I product legislation have until 2 August 2027.
Related Articles
The Complete EU AI Act Compliance Guide
The definitive guide to EU AI Act compliance: timelines, classifications, and roadmap
AI Risk Assessment Framework
95 friction points across 8 organizational layers for systematic AI risk assessment
GPAI Compliance Guide
General Purpose AI obligations under the EU AI Act
Graph Intelligence for Compliance
Why TAMR+ outperforms vector search for regulatory AI
