1. Article 6(1): AI Systems as Safety Components of Annex I Products
Article 6(1) classifies an AI system as high-risk when it is used as a safety component of a product covered by Union harmonisation legislation listed in Annex I, or is itself such a product, and that product is required to undergo a third-party conformity assessment under that legislation.
The Annex I list covers Union harmonisation legislation including the machinery regulation, medical devices, in vitro diagnostic medical devices, radio equipment, civil aviation, marine equipment, railway interoperability, motor vehicles, and agricultural and forestry vehicles. If an AI system is embedded in, or controls a safety-relevant function of, a product subject to one of these frameworks, and that product must undergo third-party conformity assessment, it is high-risk under Article 6(1).
Medical Devices (MDR 2017/745)
AI-powered diagnostic imaging algorithms that flag anomalies for clinical review — these are safety components of the medical device.
Machinery Regulation (2023/1230)
AI control systems that manage safety-critical functions such as emergency stop, load monitoring, or collision avoidance in industrial robots.
Civil Aviation (EASA regulations)
AI systems used in flight management, autopilot assist, or ground proximity warning that fall within EASA-certified avionics.
Motor Vehicles (Reg 2018/858)
AI vision systems for autonomous emergency braking or lane-keeping assistance that are certified under vehicle type approval.
Key Distinction: Not every AI system inside an Annex I product is high-risk — only those that serve as a safety component. An AI chatbot embedded in a medical device's patient portal for appointment scheduling is not a safety component and does not trigger Article 6(1). The assessment requires functional analysis of the AI's role within the product, not merely its physical or software co-location.
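Expressed as a minimal sketch (the field and function names are illustrative, not drawn from the Act), the Article 6(1) test is a conjunction of three factual findings:

```python
from dataclasses import dataclass

@dataclass
class AnnexIAssessment:
    """Illustrative inputs for the Article 6(1) test; field names are hypothetical."""
    is_safety_component: bool              # functional role in product safety, not mere co-location
    covered_by_annex_i: bool               # product falls under listed Union harmonisation legislation
    third_party_assessment_required: bool  # that legislation mandates third-party conformity assessment

def is_high_risk_article_6_1(a: AnnexIAssessment) -> bool:
    # All three conditions must hold. A scheduling chatbot inside a medical
    # device's patient portal fails the first condition and is not caught here.
    return a.is_safety_component and a.covered_by_annex_i and a.third_party_assessment_required
```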
2. Article 6(2): AI Systems Listed in Annex III
Article 6(2) classifies AI systems as high-risk when they fall within one of the eight categories defined in Annex III. Unlike Article 6(1), which depends on integration into a regulated product, Article 6(2) classification is driven by the domain of use and function performed.
| Annex III Category | Examples of High-Risk AI Systems | Sector Impact |
|---|---|---|
| 1. Biometric ID & Categorisation | Remote biometric identification, emotion recognition, biometric categorisation | Law enforcement, retail, HR |
| 2. Critical Infrastructure | AI managing digital or physical critical infrastructure components | Energy, water, transport, finance |
| 3. Education & Vocational Training | AI systems for student evaluation, proctoring, educational path allocation | EdTech, universities |
| 4. Employment & Worker Management | AI for CV screening, promotion decisions, task allocation, monitoring | HR, staffing, workforce platforms |
| 5. Essential Private/Public Services | Credit scoring, insurance risk assessment, benefits eligibility, emergency dispatch | FinTech, insurance, public sector |
| 6. Law Enforcement | Predictive risk assessment, evidence evaluation, lie detection | Police, judiciary, corrections |
| 7. Migration & Asylum | Document authenticity verification, risk assessment for border control | Border agencies, immigration services |
| 8. Administration of Justice | AI assisting judges in case research, sentencing recommendations | Courts, legal services |
The Annex III list is not static. Article 7 empowers the European Commission to amend it through delegated acts, adding or modifying high-risk use cases within the Annex III areas where AI is found to pose significant risks to health, safety, or fundamental rights. Organizations in sectors adjacent to the current eight categories should monitor Commission guidance and delegated act consultations.
3. The Two-Condition Test for Article 6(2)
Inclusion in an Annex III category does not automatically make every AI system in that domain high-risk. Article 6(3) provides a carve-out: where an Annex III system does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making, the provider may document a reasoned assessment that the system is not high-risk. In practice, this judgement can be operationalised as a two-condition test.
Condition 1: Standalone Operation
The AI system must be designed or used to operate independently — not merely as a supplementary tool embedded in a larger human workflow where the AI output is one input among many and a qualified human makes the final determination without reliance on the AI.
Fails condition if: AI provides a recommendation that a human routinely validates against multiple independent data sources with documented discretion.
Condition 2: Specific Consequential Purpose
The AI system must perform a function that materially influences a consequential outcome for natural persons — employment, credit, education access, legal status, healthcare, or fundamental rights. Informational or analytical tools that do not influence a binding or quasi-binding decision may fall below this threshold.
Fails condition if: AI generates market research summaries used internally by analysts — no binding decision affecting individuals flows from the output.
Practical Implication: Both conditions must be satisfied simultaneously for a system to qualify as high-risk under Article 6(2). Providers who believe their system falls within Annex III but fails one or both conditions must document their analysis formally. Undocumented assumptions about non-high-risk status are the most common classification mistake and will not withstand regulatory scrutiny.
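As a sketch of how the two-condition test can be recorded in a classification tool (the function and field names are hypothetical, not statutory terms), both findings must be true before high-risk status attaches:

```python
def annex_iii_two_condition_test(standalone_operation: bool,
                                 materially_influences_outcome: bool) -> dict:
    """Both conditions must hold for high-risk status under Article 6(2)."""
    high_risk = standalone_operation and materially_influences_outcome
    return {
        "high_risk": high_risk,
        # A negative conclusion still needs a formally documented justification.
        "documented_justification_required": not high_risk,
    }

# Example: a CV ranking tool whose output HR reviewers rely on directly.
print(annex_iii_two_condition_test(True, True))   # {'high_risk': True, ...}
```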
4. Article 51: GPAI Models with Systemic Risk, a Separate Classification Pathway
Article 51 establishes a distinct classification pathway for general-purpose AI (GPAI) models with systemic risk. Unlike Articles 6(1) and 6(2), which classify AI systems by application context, Article 51 classifies GPAI models by their scale and systemic potential.
A GPAI model is presumed to have systemic risk if the cumulative compute used for its training exceeds 10²⁵ floating-point operations (FLOPs). As of mid-2026, models widely reported to fall in this range include GPT-4-class systems, Gemini Ultra, Claude 3 Opus, and Llama 3 405B, though the Commission may designate additional models irrespective of the FLOPs threshold where evidence of systemic risk is identified.
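For a first-pass screen against the threshold, training compute for dense transformer models is often approximated as roughly 6 FLOPs per parameter per training token; the sketch below uses that rule of thumb and a hypothetical model, and is not a method prescribed by the Act:

```python
def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    # Common rule of thumb for dense transformers: ~6 FLOPs per parameter per token.
    return 6.0 * parameters * training_tokens

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold for systemic risk

# Hypothetical model: 400B parameters trained on 15T tokens -> ~3.6e25 FLOPs.
flops = estimated_training_flops(4.0e11, 1.5e13)
print(f"{flops:.1e} FLOPs, systemic-risk presumption: {flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```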
Separation from Use-Case Classification
A GPAI model with systemic risk does not need to be deployed in an Annex III use case to carry obligations. The obligations under Articles 53 and 55 attach to the model itself, regardless of how deployers use it downstream.
Cumulative Obligations
Providers of systemic-risk GPAI models carry both the general GPAI obligations (Article 53: documentation, training data transparency, copyright policies) and the additional systemic risk obligations (Article 55: adversarial testing, incident reporting, cybersecurity measures).
Downstream Impact on Deployers
Organizations deploying systemic-risk GPAI models via API should understand that the model provider's obligations do not transfer to them, but deployers carry obligations of their own: Article 26 duties (including monitoring of operation and incident reporting) where the downstream use is high-risk, and Article 50 transparency obligations toward end users.
5. Classification Flowchart: 5 Decision Nodes
The following five-node decision process provides a structured path from any AI system to its correct EU AI Act classification. Apply the nodes sequentially — a YES result at any node determines the classification and compliance track.
Node 1: Prohibited Practice?
Does the AI system deploy subliminal manipulation, social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement (outside the Act's narrow exceptions), or crime prediction based solely on profiling?
YES: PROHIBITED — cease operation immediately. Article 5 violation.
NO: Proceed to Node 2.
Node 2: GPAI Model Above 10²⁵ FLOPs?
Is this a general-purpose AI model trained with compute exceeding 10²⁵ FLOPs, or designated by the Commission as systemic risk?
YES: SYSTEMIC RISK GPAI — obligations under Articles 53 + 55 apply. Article 51 pathway.
NO: Proceed to Node 3.
Node 3: Annex I Safety Component?
Is this AI system a safety component of a product listed in Annex I (machinery, medical devices, vehicles, aviation, etc.) that requires third-party conformity assessment?
YES: HIGH-RISK — Article 6(1). Chapter III obligations apply including Article 43 conformity assessment.
NO: Proceed to Node 4.
Node 4: Annex III Use Case?
Does this AI system perform a function within one of the eight Annex III categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice)?
YES: Proceed to Node 5 (two-condition test).
NO: LIMITED RISK or MINIMAL RISK — Article 50 transparency obligations may still apply for certain interaction types.
Node 5: Two-Condition Test
Does the system operate standalone AND materially influence a consequential decision for natural persons?
YES: HIGH-RISK — Article 6(2). Full Chapter III obligations including conformity assessment, technical documentation, and post-market monitoring.
NO: Document your reasoning. System may be non-high-risk within Annex III. File justification with national competent authority if requested.
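The five nodes translate directly into a sequential decision function; the sketch below assumes the assessor supplies the answer to each node's question, and the names and labels are illustrative rather than statutory terms:

```python
from enum import Enum

class Classification(Enum):
    PROHIBITED = "Prohibited practice (Article 5)"
    SYSTEMIC_RISK_GPAI = "Systemic-risk GPAI (Articles 51, 53, 55)"
    HIGH_RISK_ANNEX_I = "High-risk, Article 6(1)"
    HIGH_RISK_ANNEX_III = "High-risk, Article 6(2)"
    NOT_HIGH_RISK_DOCUMENTED = "Annex III domain, not high-risk (document reasoning)"
    LIMITED_OR_MINIMAL = "Limited or minimal risk (Article 50 may still apply)"

def classify(prohibited_practice: bool,
             systemic_risk_gpai: bool,
             annex_i_safety_component: bool,
             annex_iii_use_case: bool,
             standalone_and_consequential: bool) -> Classification:
    if prohibited_practice:                 # Node 1
        return Classification.PROHIBITED
    if systemic_risk_gpai:                  # Node 2
        return Classification.SYSTEMIC_RISK_GPAI
    if annex_i_safety_component:            # Node 3
        return Classification.HIGH_RISK_ANNEX_I
    if annex_iii_use_case:                  # Node 4
        if standalone_and_consequential:    # Node 5
            return Classification.HIGH_RISK_ANNEX_III
        return Classification.NOT_HIGH_RISK_DOCUMENTED
    return Classification.LIMITED_OR_MINIMAL
```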
6. Borderline Cases: When to Consult the National Competent Authority
EU AI Act classification is not always binary. Borderline cases arise when an AI system falls in the ambiguous space between high-risk and limited-risk, particularly for Annex III use cases where the two-condition test is genuinely uncertain. National competent authorities (NCAs) — designated market surveillance authorities in each member state — have the power to issue binding interpretations on classification questions.
Employment AI with Human Override
A CV screening tool that ranks candidates but where HR reviewers independently evaluate all shortlisted candidates without relying on the rank order. Two-condition test outcome is genuinely ambiguous — consult NCA if deploying at scale above 500 candidates per month.
Credit Scoring for Non-Binding Offers
An AI generating pre-qualification scores for credit card offers that are promotional, not binding, and subject to full underwriting before credit is extended. Annex III category 5 applies, but the materially-influences-outcome condition is disputed.
EdTech Adaptive Learning Pathways
An AI that recommends learning modules but does not assess students for grading, progression, or qualification award. Falls within Annex III category 3 domain but may not meet the two-condition test depending on deployment context.
Multi-Jurisdiction Deployment
AI systems deployed simultaneously in multiple EU member states where NCAs have issued divergent preliminary guidance. Seek formal clarification from the NCA in your primary establishment member state and document responses.
When consulting an NCA, prepare a written classification analysis that includes: the AI system's functional description, the Annex III category under consideration, your two-condition test analysis, any comparable systems the Commission has previously classified, and your proposed compliance track. NCAs in the Netherlands (RDI), Germany (BNetzA), and France (CNIL) have all published preliminary classification guidance.
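Those elements map naturally onto a structured record that can be versioned and attached to the correspondence; a minimal sketch with illustrative field names:

```python
from dataclasses import dataclass, field

@dataclass
class ClassificationAnalysis:
    """Written classification analysis for an NCA consultation; field names are illustrative."""
    system_description: str                 # functional description of the AI system
    annex_iii_category: str                 # e.g. "4. Employment & worker management"
    two_condition_analysis: str             # reasoning on standalone operation and material influence
    comparable_precedents: list[str] = field(default_factory=list)  # prior Commission/NCA positions
    proposed_compliance_track: str = ""     # e.g. "non-high-risk within Annex III, documented"
```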
7. Impact of Wrong Classification: Article 99 Penalties
The financial and operational consequences of misclassification are severe. Article 99 establishes a tiered penalty structure that is explicitly calibrated to the severity of the compliance failure.
| Violation Type | Penalty Provision | Maximum Fine | % of Worldwide Annual Turnover (whichever is higher) |
|---|---|---|---|
| Prohibited practice violations (Art. 5) | Art. 99(3) | €35,000,000 | 7% |
| High-risk system obligation violations (failure to comply after misclassification) | Art. 99(4) | €15,000,000 | 3% |
| Supplying incorrect, incomplete, or misleading information to authorities | Art. 99(5) | €7,500,000 | 1% |
| SME / startup reduced rate | Art. 99(6) | Lower of the applicable amount or percentage | Proportionate |
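Each cap is expressed as the higher of a fixed amount and a share of worldwide annual turnover, with the Article 99(6) rule for SMEs taking the lower of the two; a minimal sketch of that arithmetic using hypothetical figures:

```python
def max_administrative_fine(fixed_cap_eur: float, turnover_share: float,
                            worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of an Article 99 fine; illustrative helper, not legal advice."""
    turnover_based = turnover_share * worldwide_turnover_eur
    # Undertakings: whichever is higher; SMEs and start-ups: whichever is lower (Art. 99(6)).
    return min(fixed_cap_eur, turnover_based) if is_sme else max(fixed_cap_eur, turnover_based)

# High-risk obligation violation (EUR 15m / 3%) for a firm with EUR 2bn worldwide turnover:
print(max_administrative_fine(15_000_000, 0.03, 2_000_000_000))               # 60,000,000
print(max_administrative_fine(15_000_000, 0.03, 2_000_000_000, is_sme=True))  # 15,000,000
```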
Beyond financial penalties, NCAs have the authority to order market withdrawal, suspend deployment, and require public disclosure of compliance failures. For regulated-sector deployers — banks, insurers, hospitals, public authorities — NCA action under the EU AI Act can trigger parallel investigations under DORA, GDPR, or sector-specific financial regulation.
Proactive Disclosure Mitigates Risk: Organizations that proactively disclose classification uncertainty to their NCA, document their analysis, and commit to a compliance roadmap are treated significantly more favorably than those where misclassification is discovered during enforcement. The EU AI Act's enforcement framework explicitly rewards good-faith engagement.
8. TraceGov.ai Automated Classification Tool
Manual Article 6 classification is time-consuming, legally nuanced, and prone to inconsistency when applied across a portfolio of AI systems. TraceGov.ai's automated classification engine, powered by the TAMR+ methodology — which achieves 74% accuracy on the EU-RegQA benchmark versus the 38.5% industry baseline — addresses this problem at enterprise scale.
AI System Inventory Ingestion
Connect TraceGov.ai to your AI system registry, CMDB, or procurement records. The platform ingests system descriptions, use case metadata, and deployment contexts automatically.
Five-Node Classification Engine
Each system is evaluated against all five classification nodes in sequence. The TAMR+ engine applies regulatory text interpretation trained on the EU AI Act, Commission guidance, and NCA published positions to resolve ambiguous cases.
Borderline Case Flagging
Systems where the two-condition test is genuinely uncertain are flagged with a confidence score. Legal teams receive structured analysis packages — not raw model outputs — for human review before final classification is recorded.
Audit Trail Generation
Every classification decision produces a time-stamped audit log with regulatory citations, reasoning chain, and the version of the TAMR+ model used. This log is formatted for NCA submission if classification is ever challenged; an illustrative record shape is sketched after this feature list.
TRACE Score Integration
Classified high-risk systems automatically flow into the TRACE compliance scoring module, which tracks obligation fulfilment across Articles 9–15, 43, and 72 — giving compliance teams a real-time readiness dashboard for the August 2026 deadline.
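As an illustration only (this is not TraceGov.ai's documented schema), an audit record of the kind described under Audit Trail Generation might carry fields such as:

```python
import datetime

# Purely illustrative record shape; TraceGov.ai's actual log format is not specified here.
audit_record = {
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "system_id": "ai-system-0042",                       # hypothetical inventory identifier
    "classification": "HIGH_RISK_ANNEX_III",
    "regulatory_citations": ["Article 6(2)", "Annex III, point 4"],
    "reasoning_chain": ["Node 4: employment use case", "Node 5: both conditions met"],
    "engine_version": "TAMR+ model version recorded at classification time",
}
print(audit_record["classification"])
```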
