EU AI Act Compliance · 12 min read

EU AI Act Article 6: High-Risk AI Classification Explained

Article 6 of the EU AI Act defines which AI systems are legally classified as high-risk — and the consequences of that classification are substantial. Getting the classification wrong can trigger Article 99 penalties of up to €15 million or 3% of global annual turnover. This guide decodes Article 6(1), Article 6(2) and its Article 6(3) carve-out, and the separate pathway for general-purpose AI models with systemic risk; it provides a five-node classification flowchart, addresses borderline cases, and explains how TraceGov.ai automates the process.

Updated February 10, 2026

1. Article 6(1): AI Systems as Safety Components of Annex I Products

Article 6(1) classifies an AI system as high-risk when it is used as a safety component of a product covered by Union harmonisation legislation listed in Annex I, and that product is required to undergo a third-party conformity assessment under that legislation.

The Annex I product list spans Union harmonisation legislation covering, among others, machinery, medical devices, in vitro diagnostic medical devices, radio equipment, civil aviation, marine equipment, railway interoperability, motor vehicles, and agricultural and forestry vehicles. If an AI system is embedded in, or controls a safety-relevant function of, a product subject to one of these frameworks and that product requires third-party conformity assessment, it is high-risk under Article 6(1).

Medical Devices (MDR 2017/745)

AI-powered diagnostic imaging algorithms that flag anomalies for clinical review — these are safety components of the medical device.

Machinery Regulation (2023/1230)

AI control systems that manage safety-critical functions such as emergency stop, load monitoring, or collision avoidance in industrial robots.

Civil Aviation (EASA regulations)

AI systems used in flight management, autopilot assist, or ground proximity warning that fall within EASA-certified avionics.

Motor Vehicles (Reg 2018/858)

AI vision systems for autonomous emergency braking or lane-keeping assistance that are certified under vehicle type approval.

Key Distinction: Not every AI system inside an Annex I product is high-risk — only those that serve as a safety component. An AI chatbot embedded in a medical device's patient portal for appointment scheduling is not a safety component and does not trigger Article 6(1). The assessment requires functional analysis of the AI's role within the product, not merely its physical or software co-location.

2. Article 6(2): AI Systems Listed in Annex III

Article 6(2) classifies AI systems as high-risk when they fall within one of the eight categories defined in Annex III. Unlike Article 6(1), which depends on integration into a regulated product, Article 6(2) classification is driven by the domain of use and function performed.

| Annex III Category | Examples of High-Risk AI Systems | Sector Impact |
| --- | --- | --- |
| 1. Biometric ID & Categorisation | Remote biometric identification, emotion recognition, biometric categorisation | Law enforcement, retail, HR |
| 2. Critical Infrastructure | AI managing digital or physical critical infrastructure components | Energy, water, transport, finance |
| 3. Education & Vocational Training | AI systems for student evaluation, proctoring, educational path allocation | EdTech, universities |
| 4. Employment & Worker Management | AI for CV screening, promotion decisions, task allocation, monitoring | HR, staffing, workforce platforms |
| 5. Essential Private/Public Services | Credit scoring, insurance risk assessment, benefits eligibility, emergency dispatch | FinTech, insurance, public sector |
| 6. Law Enforcement | Predictive risk assessment, evidence evaluation, lie detection | Police, judiciary, corrections |
| 7. Migration & Asylum | Document authenticity verification, risk assessment for border control | Border agencies, immigration services |
| 8. Administration of Justice | AI assisting judges in case research, sentencing recommendations | Courts, legal services |

The Annex III list is not static. Article 7 empowers the European Commission to update it through delegated acts, adding new high-risk categories when AI in a new domain is found to pose significant risks to health, safety, or fundamental rights. Organizations in sectors adjacent to the current eight categories should monitor Commission guidance and delegated act consultations.
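Because the category list can evolve through delegated acts, classification tooling often pins the current eight areas down as an explicit, versioned enumeration. A minimal Python sketch of that idea — the enum name and labels below are illustrative, not official identifiers from the Act:

```python
from enum import Enum

class AnnexIIICategory(Enum):
    """Illustrative labels for the eight Annex III high-risk areas (not official identifiers)."""
    BIOMETRICS = "Biometric identification and categorisation"
    CRITICAL_INFRASTRUCTURE = "Critical infrastructure management"
    EDUCATION = "Education and vocational training"
    EMPLOYMENT = "Employment and worker management"
    ESSENTIAL_SERVICES = "Essential private and public services"
    LAW_ENFORCEMENT = "Law enforcement"
    MIGRATION_ASYLUM = "Migration, asylum and border control"
    JUSTICE = "Administration of justice and democratic processes"

# Example: tag a system in an internal inventory with the category under consideration.
cv_screening_tool = {
    "name": "candidate-ranking-v2",   # hypothetical system name
    "annex_iii_category": AnnexIIICategory.EMPLOYMENT,
}
print(cv_screening_tool["annex_iii_category"].value)
```

If the Commission adds a category by delegated act, the enumeration is extended in one place and every downstream screening record that references it stays consistent.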

3. The Two-Condition Test for Article 6(2)

Inclusion in an Annex III category does not automatically make every AI system in that domain high-risk. Article 6(3) provides a carve-out from Article 6(2) classification: if an AI system does not pose a significant risk of harm and does not materially influence decision outcomes, a provider may document a reasoned conclusion that the system is not high-risk. In practice, this is operationalised as a two-condition test.

Condition 1: Standalone Operation

The AI system must be designed or used to operate independently — not merely as a supplementary tool embedded in a larger human workflow where the AI output is one input among many and a qualified human makes the final determination without reliance on the AI.

Fails condition if: AI provides a recommendation that a human routinely validates against multiple independent data sources with documented discretion.

Condition 2: Specific Consequential Purpose

The AI system must perform a function that materially influences a consequential outcome for natural persons — employment, credit, education access, legal status, healthcare, or fundamental rights. Informational or analytical tools that do not influence a binding or quasi-binding decision may fall below this threshold.

Fails condition if: AI generates market research summaries used internally by analysts — no binding decision affecting individuals flows from the output.

Practical Implication: Both conditions must be satisfied simultaneously for a system to qualify as high-risk under Article 6(2). Providers who believe their system falls within Annex III but fails one or both conditions must document their analysis formally. Undocumented assumptions about non-high-risk status are the most common classification mistake and will not withstand regulatory scrutiny.
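A minimal sketch of how that screening step could be encoded in an internal checklist, assuming the organisation captures each answer as a boolean after human analysis — the field and function names are illustrative, and the output is a documentation aid, not a legal determination:

```python
from dataclasses import dataclass

@dataclass
class AnnexIIIScreening:
    in_annex_iii_domain: bool             # precondition: falls within one of the eight Annex III categories
    operates_standalone: bool             # Condition 1: not merely a supplementary input to a human workflow
    materially_influences_outcome: bool   # Condition 2: shapes a consequential decision for natural persons

def is_high_risk_under_6_2(s: AnnexIIIScreening) -> bool:
    """Both conditions must hold simultaneously for an Annex III system to be high-risk."""
    return s.in_annex_iii_domain and s.operates_standalone and s.materially_influences_outcome

# Example: an internal market-research summariser fails Condition 2.
summariser = AnnexIIIScreening(
    in_annex_iii_domain=True,
    operates_standalone=True,
    materially_influences_outcome=False,
)
print(is_high_risk_under_6_2(summariser))  # False -> document the reasoning for the carve-out
```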

4. Article 51: GPAI Models with Systemic Risk — A Separate Classification Pathway

Article 51 establishes a distinct classification pathway for general-purpose AI (GPAI) models with systemic risk. Unlike Article 6(1) and 6(2), which classify AI systems based on application context, Article 51 classifies GPAI models based on their scale and systemic potential.

A GPAI model is presumed to have systemic risk if it was trained using a total compute exceeding 10²⁵ floating-point operations (FLOPs). As of early 2026, models widely reported to fall in this category include GPT-4 class systems, Gemini Ultra, Claude 3 Opus, and Llama 3 405B, though the Commission may designate additional models irrespective of the FLOPs threshold where evidence of systemic risk is identified.
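The threshold can be sanity-checked with the common engineering rule of thumb that training compute is roughly 6 FLOPs per parameter per training token — an estimate, not a formula the regulation itself prescribes. A small sketch with illustrative figures:

```python
# Rough training-compute estimate: ~6 FLOPs per parameter per training token (heuristic, not the Act's method).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6.0 * n_parameters * n_training_tokens

# Illustrative example: a 400-billion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(4.0e11, 1.5e13)
print(f"{flops:.2e} FLOPs")                    # ~3.60e+25
print(flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)   # True -> presumed systemic risk
```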

Separation from Use-Case Classification

A GPAI model with systemic risk does not need to be deployed in an Annex III use case to carry obligations. The obligations under Articles 53 and 55 apply to the model itself, regardless of how deployers use it downstream.

Cumulative Obligations

Providers of systemic-risk GPAI models carry both the general GPAI obligations (Article 53: documentation, training data transparency, copyright policies) and the additional systemic risk obligations (Article 55: adversarial testing, incident reporting, cybersecurity measures).

Downstream Impact on Deployers

Organizations deploying systemic-risk GPAI models via API must understand that the model provider's obligations do not transfer to them — but deployers have their own Article 26 obligations including post-market monitoring, incident reporting, and Article 50 transparency for end users.

5. Classification Flowchart: 5 Decision Nodes

The following five-node decision process provides a structured path from any AI system to its correct EU AI Act classification. Apply the nodes sequentially — a YES result at any node determines the classification and compliance track (a minimal code sketch of the flow follows the node list).


Node 1: Prohibited Practice?

Does the AI system deploy subliminal manipulation, social scoring, real-time biometric ID in public spaces, or predictive policing based solely on profiling?

YES: PROHIBITED — cease operation immediately. Article 5 violation.

NO: Proceed to Node 2.


Node 2: GPAI Model Above 10²⁵ FLOPs?

Is this a general-purpose AI model trained with compute exceeding 10²⁵ FLOPs, or designated by the Commission as systemic risk?

YES: SYSTEMIC RISK GPAI — obligations under Articles 53 + 55 apply. Article 51 pathway.

NO: Proceed to Node 3.


Node 3: Annex I Safety Component?

Is this AI system a safety component of a product listed in Annex I (machinery, medical devices, vehicles, aviation, etc.) that requires third-party conformity assessment?

YES: HIGH-RISK — Article 6(1). Chapter III obligations apply including Article 43 conformity assessment.

NO: Proceed to Node 4.


Node 4: Annex III Use Case?

Does this AI system perform a function within one of the eight Annex III categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice)?

YES: Proceed to Node 5 (two-condition test).

NO: LIMITED RISK or MINIMAL RISK — Article 50 transparency obligations may still apply for certain interaction types.


Node 5: Two-Condition Test

Does the system operate standalone AND materially influence a consequential decision for natural persons?

YES: HIGH-RISK — Article 6(2). Full Chapter III obligations including conformity assessment, technical documentation, and post-market monitoring.

NO: Document your reasoning. System may be non-high-risk within Annex III. File justification with national competent authority if requested.
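Teams that automate this triage often encode it as a short decision function. A minimal sketch, assuming each node's answer has been captured as a boolean by a human assessor — the field names and outcome strings are illustrative, and borderline answers still belong with legal review:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    # Illustrative inputs an assessor would supply, one per flowchart node.
    uses_prohibited_practice: bool        # Node 1
    is_gpai_above_threshold: bool         # Node 2
    is_annex_i_safety_component: bool     # Node 3
    in_annex_iii_category: bool           # Node 4
    operates_standalone: bool             # Node 5, condition 1
    materially_influences_outcome: bool   # Node 5, condition 2

def classify(p: SystemProfile) -> str:
    """Walk the five nodes in order; the first decisive answer fixes the outcome."""
    if p.uses_prohibited_practice:
        return "PROHIBITED (Article 5)"
    if p.is_gpai_above_threshold:
        return "SYSTEMIC-RISK GPAI (Articles 53 + 55)"
    if p.is_annex_i_safety_component:
        return "HIGH-RISK (Article 6(1))"
    if not p.in_annex_iii_category:
        return "LIMITED / MINIMAL RISK (check Article 50 transparency)"
    if p.operates_standalone and p.materially_influences_outcome:
        return "HIGH-RISK (Article 6(2))"
    return "NOT HIGH-RISK WITHIN ANNEX III (document the reasoning)"

# Example: a CV-screening tool that drives shortlisting decisions.
print(classify(SystemProfile(
    uses_prohibited_practice=False,
    is_gpai_above_threshold=False,
    is_annex_i_safety_component=False,
    in_annex_iii_category=True,
    operates_standalone=True,
    materially_influences_outcome=True,
)))  # HIGH-RISK (Article 6(2))
```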

6. Borderline Cases: When to Consult the National Competent Authority

EU AI Act classification is not always binary. Borderline cases arise when an AI system falls in the ambiguous space between high-risk and limited-risk, particularly for Annex III use cases where the two-condition test is genuinely uncertain. National competent authorities (NCAs) — designated market surveillance authorities in each member state — have the power to issue binding interpretations on classification questions.

Employment AI with Human Override

A CV screening tool that ranks candidates but where HR reviewers independently evaluate all shortlisted candidates without relying on the rank order. Two-condition test outcome is genuinely ambiguous — consult NCA if deploying at scale above 500 candidates per month.

Credit Scoring for Non-Binding Offers

An AI generating pre-qualification scores for credit card offers that are promotional, not binding, and subject to full underwriting before credit is extended. Annex III category 5 (essential services) applies, but whether the system materially influences the final outcome is disputed.

EdTech Adaptive Learning Pathways

An AI that recommends learning modules but does not assess students for grading, progression, or qualification award. Falls within Annex III category 3 domain but may not meet the two-condition test depending on deployment context.

Multi-Jurisdiction Deployment

AI systems deployed simultaneously in multiple EU member states where NCAs have issued divergent preliminary guidance. Seek formal clarification from the NCA in your primary establishment member state and document responses.

When consulting an NCA, prepare a written classification analysis that includes: the AI system's functional description, the Annex III category under consideration, your two-condition test analysis, any comparable systems the Commission has previously classified, and your proposed compliance track. NCAs in the Netherlands (RDI), Germany (BNetzA), and France (CNIL) have all published preliminary classification guidance.
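One way to keep such submissions consistent across systems is a fixed record structure covering the five elements listed above. A minimal sketch, assuming those elements are captured as plain text fields — the structure and field names are illustrative, not an NCA-mandated format:

```python
from dataclasses import dataclass, field

@dataclass
class ClassificationSubmission:
    system_description: str                 # functional description of the AI system
    annex_iii_category: str                 # Annex III category under consideration
    two_condition_analysis: str             # standalone operation + material influence on outcomes
    comparable_precedents: list[str] = field(default_factory=list)   # previously classified comparable systems
    proposed_compliance_track: str = "high-risk (Chapter III)"       # default to the stricter track

submission = ClassificationSubmission(
    system_description="CV screening tool ranking candidates for independent human review",
    annex_iii_category="Employment and worker management",
    two_condition_analysis="HR reviewers evaluate all shortlisted candidates without relying on the rank order",
    comparable_precedents=["NCA preliminary guidance on recruitment AI"],  # illustrative reference
    proposed_compliance_track="non-high-risk within Annex III (documented)",
)
print(submission.proposed_compliance_track)
```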

7. Impact of Wrong Classification: Article 99 Penalties

The financial and operational consequences of misclassification are severe. Article 99 establishes a tiered penalty structure that is explicitly calibrated to the severity of the compliance failure.

| Violation Type | Penalty Provision | Maximum Fine | % of Global Turnover |
| --- | --- | --- | --- |
| Prohibited practice violations (Art. 5) | Art. 99(3) | €35,000,000 | 7% |
| High-risk system obligation violations (failure to comply after misclassification) | Art. 99(4) | €15,000,000 | 3% |
| Supplying incorrect information to authorities | Art. 99(5) | €7,500,000 | 1% |
| SME / startup reduced rate | Art. 99(6) | Lower of the applicable amounts | Proportionate |
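Each fine is expressed as "up to" a fixed amount or a percentage of worldwide annual turnover, with the higher figure applying to undertakings and the lower figure applying under the SME provision, so exposure scales with company size. A small worked sketch (the turnover figure is illustrative):

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, worldwide_turnover_eur: float,
             is_sme: bool = False) -> float:
    """Ceiling of an Article 99 fine: higher of the two amounts, or the lower of the two for SMEs/startups."""
    turnover_based = turnover_pct * worldwide_turnover_eur
    return min(fixed_cap_eur, turnover_based) if is_sme else max(fixed_cap_eur, turnover_based)

# Illustrative: high-risk obligation violation, €2 billion worldwide annual turnover.
print(max_fine(15_000_000, 0.03, 2_000_000_000))                 # 60,000,000.0 -> the 3% cap applies
print(max_fine(15_000_000, 0.03, 2_000_000_000, is_sme=True))    # 15,000,000.0 -> SME reduced ceiling
```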

Beyond financial penalties, NCAs have the authority to order market withdrawal, suspend deployment, and require public disclosure of compliance failures. For regulated-sector deployers — banks, insurers, hospitals, public authorities — NCA action under the EU AI Act can trigger parallel investigations under DORA, GDPR, or sector-specific financial regulation.

Proactive Disclosure Mitigates Risk: Organizations that proactively disclose classification uncertainty to their NCA, document their analysis, and commit to a compliance roadmap are treated significantly more favorably than those where misclassification is discovered during enforcement. The EU AI Act's enforcement framework explicitly rewards good-faith engagement.

8. TraceGov.ai Automated Classification Tool

Manual Article 6 classification is time-consuming, legally nuanced, and prone to inconsistency when applied across a portfolio of AI systems. TraceGov.ai's automated classification engine, powered by the TAMR+ methodology — which achieves 74% accuracy on the EU-RegQA benchmark versus the 38.5% industry baseline — addresses this problem at enterprise scale.

AI System Inventory Ingestion

Connect TraceGov.ai to your AI system registry, CMDB, or procurement records. The platform ingests system descriptions, use case metadata, and deployment contexts automatically.

Five-Node Classification Engine

Each system is evaluated against all five classification nodes in sequence. The TAMR+ engine applies regulatory text interpretation trained on the EU AI Act, Commission guidance, and NCA published positions to resolve ambiguous cases.

Borderline Case Flagging

Systems where the two-condition test is genuinely uncertain are flagged with a confidence score. Legal teams receive structured analysis packages — not raw model outputs — for human review before final classification is recorded.

Audit Trail Generation

Every classification decision produces a time-stamped audit log with regulatory citations, reasoning chain, and the version of the TAMR+ model used. This log is formatted for NCA submission if classification is ever challenged.

TRACE Score Integration

Classified high-risk systems automatically flow into the TRACE compliance scoring module, which tracks obligation fulfilment across Articles 9–15, 43, and 72 — giving compliance teams a real-time readiness dashboard for the August 2026 deadline.
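As an illustration of what a classification audit record of this kind might contain — a hypothetical structure, not TraceGov.ai's actual schema or API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ClassificationAuditRecord:
    """Hypothetical audit-log entry for a single classification decision."""
    system_id: str
    outcome: str                          # e.g. "HIGH-RISK (Article 6(2))"
    regulatory_citations: list[str]       # provisions relied on in the decision
    reasoning_chain: list[str]            # ordered reasoning steps for human review
    engine_version: str                   # version of the classification engine used
    confidence: float                     # borderline cases get routed to legal review
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = ClassificationAuditRecord(
    system_id="credit-scoring-v3",                      # hypothetical system identifier
    outcome="HIGH-RISK (Article 6(2))",
    regulatory_citations=["Article 6(2)", "Annex III point 5"],
    reasoning_chain=["Annex III category 5 applies", "Materially influences credit decisions"],
    engine_version="classifier-2026.02",                # illustrative version string
    confidence=0.91,
)
print(record.timestamp)
```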

9. Frequently Asked Questions About EU AI Act Article 6

What is the difference between Article 6(1) and Article 6(2) of the EU AI Act?
Article 6(1) covers AI systems that are safety components of regulated products listed in Annex I — such as machinery, medical devices, and vehicles. These systems are high-risk because of their integration into safety-critical products. Article 6(2) covers AI systems that perform specific functions in sensitive domains listed in Annex III — biometrics, employment, credit, justice, and more — regardless of the product they are embedded in. Classification under 6(1) is determined by product type; classification under 6(2) is determined by use case and function.
What is the two-condition test under Article 6(2)?
To qualify as high-risk under Article 6(2), an AI system falling within one of the eight Annex III categories must satisfy two cumulative conditions: (1) it must operate as a standalone system rather than a supplementary tool in a human workflow; and (2) it must materially influence a consequential decision for natural persons. Both conditions must be met simultaneously. Systems that fall within an Annex III domain but serve only as informational tools without influencing binding decisions may qualify for the non-high-risk carve-out, provided the provider formally documents the analysis.
How does the systemic-risk GPAI pathway differ from Article 6(1) and 6(2)?
General-purpose AI (GPAI) models with systemic risk are classified under Article 51 rather than Article 6: models exceeding 10²⁵ FLOPs of training compute, or models designated by the Commission. Unlike Article 6(1) and 6(2), which classify AI systems by use case or product integration, Article 51 classifies GPAI models by scale and potential societal impact. The obligations are different too: on top of the general Article 53 duties, systemic-risk GPAI providers face adversarial testing, incident reporting to the EU AI Office, and cybersecurity requirements under Article 55.
What happens if a company misclassifies an AI system under the EU AI Act?
Misclassification — treating a high-risk system as non-high-risk — exposes providers to Article 99 penalties of up to €15 million or 3% of global annual turnover. NCAs can also order market withdrawal and mandatory corrective action. In regulated sectors, misclassification can trigger parallel regulatory investigations under DORA, GDPR, or sector-specific law. The enforcement framework explicitly rewards proactive disclosure and documented compliance efforts.
Can a provider self-declare that an Annex III AI system is not high-risk?
Yes, under specific conditions. Article 6(3) provides a carve-out for systems that fall within an Annex III category but do not materially influence consequential outcomes. The provider must formally document their reasoning, be prepared to submit it to the NCA if requested, and monitor Commission guidance updates. This self-declaration must be technically substantiated — undocumented assumptions will not withstand regulatory scrutiny. Legal counsel review before relying on this carve-out is strongly recommended.


Harish Kumar

Founder & CEO, Quantamix Solutions B.V.

18+ years building AI governance frameworks across regulated industries. Former ING Bank (Economic Capital Modeling), Rabobank (IFRS9 Engine, €400B+ portfolio), Philips (200-member GenAI Champions Community), Amazon Ring, Deutsche Bank, and Reserve Bank of India. FRM, PMP, GCP certified. Patent holder (EP26162901.8). Published researcher (SSRN 6359818). Creator of TAMR+ methodology (74% vs 38.5% on EU-RegQA benchmark).