
EU AI Act 2025–2027: Every Key Date, Deadline & Obligation Explained

The EU AI Act (Regulation (EU) 2024/1689) rolls out in phases from August 2024 through August 2027. Each phase triggers new obligations for different categories of AI systems. This article provides a comprehensive, phase-by-phase timeline with exactly what your organization must do at each stage, and the consequences of missing a deadline.

Updated March 23, 2025

1. Regulation Overview & Legislative History

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. The European Commission published its initial proposal on 21 April 2021. After extensive negotiation between the European Parliament and Council, a political agreement was reached on 8 December 2023. The regulation was formally adopted on 13 June 2024 and published in the Official Journal of the European Union on 12 July 2024.

The Act entered into force on 1 August 2024, beginning a phased implementation window that extends through 2 August 2027. This 36-month rollout was deliberately designed to give organizations time to adapt — but each phase brings legally binding obligations with immediate enforcement capability.

The regulation comprises 180 recitals and 113 articles across 13 chapters, supplemented by 13 annexes. It establishes obligations for four categories of actors: providers (developers), deployers (users), importers, and distributors of AI systems. The scope is extraterritorial: it applies to non-EU organizations whose AI systems are placed on the EU market or whose AI output is used within the EU.

Key Numbers

  • 27 EU member states must transpose and enforce the regulation
  • ~10,000 high-risk AI systems estimated to require conformity assessment (European Commission Impact Assessment, 2021)
  • EUR 35 million or 7% maximum fine for prohibited practice violations
  • 4 phased deadlines between February 2025 and August 2027

2. Master Timeline Table

The following table provides the definitive schedule of EU AI Act enforcement dates. Each row identifies the date, the regulatory provision triggered, the affected actors, and the penalty exposure.

| Date | Phase | What Becomes Enforceable | Affected Actors | Max Penalty |
| --- | --- | --- | --- | --- |
| 1 Aug 2024 | Entry into Force | Regulation in force; definitions active; AI literacy requirement begins (Art. 4) | All actors | N/A |
| 2 Feb 2025 | Prohibited Practices | 8 banned AI practices (Art. 5), including social scoring, manipulative AI, emotion recognition in workplace/school, untargeted biometric scraping, and real-time remote biometric ID (with exceptions) | All providers & deployers | €35M / 7% |
| 2 May 2025 | Codes of Practice | GPAI codes of practice finalized by AI Office; voluntary compliance before mandatory deadline | GPAI providers | N/A (voluntary) |
| 2 Aug 2025 | GPAI & Governance | General-purpose AI obligations (Chapter V); AI Office operational; national competent authorities designated; confidentiality provisions; penalties framework active | GPAI providers, Member States | €15M / 3% |
| 2 Feb 2026 | Notified Bodies | Member states notify the Commission of designated notified bodies for third-party conformity assessments | Member States, Notified Bodies | N/A |
| 2 Aug 2026 | High-Risk AI (Annex III) | Full requirements for Annex III high-risk AI systems: risk management (Art. 9), data governance (Art. 10), technical documentation (Art. 11), record-keeping (Art. 12), transparency (Art. 13), human oversight (Art. 14), accuracy & robustness (Art. 15), conformity assessments, CE marking, EU database registration | Providers, Deployers, Importers | €15M / 3% |
| 2 Aug 2026 | Regulatory Sandboxes | At least one AI regulatory sandbox per member state must be operational (Art. 57) | Member States | N/A |
| 2 Aug 2027 | Full Enforcement | All remaining provisions: Annex I high-risk AI systems (product safety legislation), transparency obligations for limited-risk systems, right to explanation, post-market monitoring, serious incident reporting | All actors | Full penalty regime |
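For compliance teams that track these dates programmatically, the schedule above can be encoded as a small deadline tracker. This is an illustrative sketch; the dictionary labels are shorthand, not official phase names:

```python
from datetime import date

# Enforcement dates from Regulation (EU) 2024/1689 (see the table above).
AI_ACT_DEADLINES = {
    "Entry into force": date(2024, 8, 1),
    "Prohibited practices (Art. 5)": date(2025, 2, 2),
    "GPAI obligations (Chapter V)": date(2025, 8, 2),
    "High-risk AI, Annex III (Arts. 9-15)": date(2026, 8, 2),
    "Full enforcement (incl. Annex I)": date(2027, 8, 2),
}

def days_until(deadline: date, today: date) -> int:
    """Days remaining until a deadline; negative means it has passed."""
    return (deadline - today).days

def report(today: date) -> list[str]:
    """One status line per enforcement milestone."""
    lines = []
    for label, deadline in AI_ACT_DEADLINES.items():
        d = days_until(deadline, today)
        status = "PASSED" if d < 0 else f"{d} days remaining"
        lines.append(f"{deadline.isoformat()}  {label}: {status}")
    return lines

print("\n".join(report(date.today())))
```

Running this against the current date gives an at-a-glance view of which phases are already binding.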

Critical Observation

As of March 2025, the prohibited practices phase is already active. The GPAI deadline is less than 5 months away. Organizations that have not yet inventoried their AI systems are operating with significant regulatory exposure.

3. Phase 1: Prohibited Practices (2 February 2025)

The first enforcement milestone applied from 2 February 2025, banning eight categories of AI systems deemed to pose unacceptable risk to fundamental rights. This was the shortest transition period in the regulation — just 6 months after entry into force — reflecting the severity of the harms these practices can cause.

What Was Banned

Article 5 of the AI Act prohibits:

  1. Subliminal manipulation — AI techniques that deploy subliminal components beyond a person's consciousness to materially distort behavior, causing significant harm
  2. Exploitation of vulnerabilities — AI that exploits age, disability, or social/economic situation to distort behavior causing significant harm
  3. Social scoring by public authorities — AI-based evaluation of natural persons based on social behavior leading to detrimental treatment disproportionate to context
  4. Individual criminal risk assessment — AI predicting criminal offenses solely based on profiling or personality traits (exceptions for systems augmenting human assessment based on objective facts)
  5. Untargeted facial recognition scraping — Creating facial recognition databases through untargeted scraping from the internet or CCTV
  6. Emotion recognition in workplace/education — AI inferring emotions in workplace and educational settings (except for medical/safety reasons)
  7. Biometric categorization by sensitive attributes — Categorizing persons based on biometric data to infer race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation
  8. Real-time remote biometric identification in public spaces — For law enforcement purposes, except under strict exceptions (searching for missing children, preventing imminent terrorist threat, locating suspects of serious crimes)

What Companies Should Have Done

  • Audited all deployed AI systems against the 8 prohibited categories
  • Documented any borderline cases with legal analysis of applicability
  • Decommissioned or modified any AI systems falling within prohibited scope
  • Communicated changes to affected stakeholders (employees, customers, data subjects)
  • Preserved evidence of compliance decisions for potential regulatory inquiry

For detailed coverage of each prohibited practice: Prohibited AI Practices Under the EU AI Act: What's Banned Since February 2025.

4. Phase 2: GPAI Obligations (2 August 2025)

The second major deadline applies to General-Purpose AI (GPAI) models and systems. Chapter V of the AI Act creates a two-tier framework:

| Tier | Threshold | Obligations | Penalty |
| --- | --- | --- | --- |
| All GPAI Models | Any general-purpose AI model | Technical documentation, training data summary (sufficiently detailed for copyright compliance), downstream provider information, EU Copyright Directive compliance, designate EU representative | €15M / 3% |
| Systemic Risk GPAI | >10^25 FLOPs training compute or Commission designation | All of the above, plus model evaluation, adversarial testing, systemic risk tracking, serious incident reporting, cybersecurity protections, energy consumption reporting | €35M / 7% |

The AI Office (established within the European Commission) is the primary enforcement body for GPAI. The office is leading the development of the first GPAI Code of Practice, due to be finalized by 2 May 2025. Providers who adopt the code can use it to demonstrate compliance, though the code is not the exclusive means of meeting obligations.

Systemic Risk Classification

The 10^25 FLOPs threshold (approximately 10 septillion floating-point operations) is a computational benchmark. As of early 2025, models from OpenAI (GPT-4 class and above), Google DeepMind (Gemini Ultra), Anthropic (Claude 3 Opus and above), and Meta (Llama 3 400B+) are generally understood to exceed this threshold based on publicly available training compute estimates. The European Commission retains authority to designate additional models as systemic risk based on criteria beyond compute alone, including number of registered users, market reach, and capability evaluations.
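The threshold can be sanity-checked with the widely used ~6 × parameters × training-tokens heuristic for dense transformer training compute. This is an approximation, not the regulation's calculation method, and the model sizes below are hypothetical:

```python
# Rough training-compute check against the Art. 51 systemic-risk presumption.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Common heuristic for dense transformers: ~6 FLOPs per parameter
    per training token (an approximation, not a regulatory formula)."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the compute estimate meets or exceeds the 10^25 threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical 400B-parameter model trained on 15T tokens:
print(f"{estimated_training_flops(400e9, 15e12):.1e}")  # ~3.6e+25, above the threshold
print(presumed_systemic_risk(400e9, 15e12))

# Hypothetical 7B-parameter model trained on 2T tokens stays well below it.
print(presumed_systemic_risk(7e9, 2e12))
```

Because the heuristic ignores sparsity, fine-tuning runs, and hardware utilization, treat any result near the threshold as a trigger for a formal compute accounting, not a conclusion.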

For GPAI Providers: Act Now

With less than 5 months until the August 2025 deadline, GPAI providers should be finalizing technical documentation, conducting copyright audits of training data, preparing energy consumption disclosures, and (if systemic risk) planning model evaluations and adversarial testing programs. Read the full GPAI compliance guide →

5. Phase 3: High-Risk AI Systems (2 August 2026)

The most operationally demanding phase applies from 2 August 2026. This is when the full requirements for Annex III high-risk AI systems become enforceable. Annex III covers AI systems used in:

  • Biometrics — Remote biometric identification (non-real-time), biometric categorization, emotion recognition
  • Critical infrastructure — AI managing safety components of road traffic, water, gas, heating, and electricity supply
  • Education & vocational training — AI determining access to education, evaluating learning outcomes, monitoring prohibited behavior during exams
  • Employment & workers management — AI for recruitment, selection, interview evaluation, promotion, termination decisions, task allocation, performance monitoring
  • Essential services — AI assessing creditworthiness, insurance pricing, emergency service dispatch prioritization
  • Law enforcement — AI assessing risk of victimization or offending, polygraph-type tools, evidence evaluation, profiling for crime detection
  • Migration & border control — AI for visa and asylum application assessment, security risk assessment, document authenticity verification
  • Justice & democratic processes — AI assisting judicial authorities in researching and interpreting law, AI influencing election outcomes

Technical Requirements (Articles 8–15)

Providers of high-risk AI systems must implement seven mandatory requirement categories:

| Requirement | Article | Key Obligations |
| --- | --- | --- |
| Risk Management System | Art. 9 | Continuous iterative process throughout the entire lifecycle; identify, analyze, evaluate, and mitigate risks to health, safety, and fundamental rights |
| Data Governance | Art. 10 | Training, validation, and test data must be relevant, representative, free of errors, and complete; bias detection and mitigation measures |
| Technical Documentation | Art. 11 | Detailed documentation per Annex IV; must demonstrate compliance before market placement; kept up to date |
| Record-Keeping | Art. 12 | Automatic logging of events throughout the system lifecycle; traceability of AI system functioning; retain logs as appropriate |
| Transparency | Art. 13 | Instructions for use enabling deployers to interpret and use output appropriately; declare limitations, intended purpose, level of accuracy |
| Human Oversight | Art. 14 | Design for effective human oversight; humans can understand capabilities and limitations, monitor operation, and decide to override or intervene |
| Accuracy, Robustness & Cybersecurity | Art. 15 | Achieve and maintain appropriate levels of accuracy; resilient against errors, faults, and adversarial manipulation; protected against unauthorized access |
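As a concrete illustration of the Art. 12 record-keeping idea, automatic logging can be prototyped with structured, timestamped JSON records. The event schema below is an assumption for illustration only; the Act does not prescribe field names:

```python
import json
import logging
from datetime import datetime, timezone

# Minimal structured audit logger in the spirit of Art. 12 record-keeping:
# timestamped, machine-readable records of AI system decision events.
logger = logging.getLogger("ai_act_audit")
if not logger.handlers:  # avoid duplicate handlers on re-import
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(message)s"))
    logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_decision_event(system_id: str, model_version: str,
                       input_ref: str, output_ref: str, operator: str) -> str:
    """Emit one traceable, timestamped record for a high-risk AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,    # a reference, not raw personal data
        "output_ref": output_ref,
        "operator": operator,      # human overseer, supporting Art. 14 oversight
    }
    line = json.dumps(record, sort_keys=True)
    logger.info(line)
    return line

# Example: a hypothetical CV-screening system logs one decision event.
log_decision_event("cv-screener-01", "v2.3.1", "application-7781",
                   "shortlist-score-0.42", "hr-reviewer-9")
```

In production these records would be shipped to tamper-evident storage with a retention policy, since Art. 12 ties logging to traceability over the system's lifetime.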

For the full classification framework: High-Risk AI Systems Classification Guide.

6. Phase 4: Full Enforcement (2 August 2027)

The final enforcement phase on 2 August 2027 brings all remaining provisions into effect. This includes:

  • Annex I high-risk AI systems — AI systems embedded in products governed by existing EU harmonization legislation: medical devices (Regulation 2017/745), machinery (Regulation 2023/1230), toys, radio equipment, in vitro diagnostics, civil aviation, vehicles, marine equipment, rail interoperability, and personal protective equipment. These products already had CE marking requirements; the AI Act adds AI-specific obligations on top.
  • Transparency obligations for limited-risk systems — AI systems interacting with natural persons (chatbots) must disclose they are AI; deep fakes must be labelled; emotion recognition and biometric categorization systems must inform subjects
  • Right to explanation — Affected persons have the right to obtain meaningful explanations of high-risk AI decisions that significantly affect them (Art. 86)
  • Post-market monitoring — Providers must establish post-market monitoring systems proportionate to the nature of the AI system and risk level (Art. 72)
  • Serious incident reporting — Providers must report serious incidents to national authorities within 15 days of becoming aware (Art. 73)

No More Grace Periods After August 2027

Once the full enforcement date passes, the entire EU AI Act is in effect with no remaining transition periods. National market surveillance authorities will have full investigative and corrective powers. Non-compliant AI systems can be recalled from the market. The full three-tier penalty structure applies to all violations.

7. Quarterly Preparation Breakdown (2025–2027)

The following schedule provides a practical quarter-by-quarter checklist for enterprise compliance teams. It assumes the organization has already completed the prohibited practices audit (Phase 1) and is now preparing for the remaining three phases.

Q2 2025 (April – June 2025)

  • Complete AI system inventory across all business units
  • Classify each system against AI Act risk tiers (prohibited, high-risk, limited, minimal)
  • If operating GPAI models: begin technical documentation and training data audits
  • Appoint AI governance lead or committee
  • Engage legal counsel for interpretation of Annex III applicability
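The inventory-and-classification step above can be sketched as a first-pass triage mapping each use case to a risk tier. This is deliberately simplified; actual classification requires legal analysis of Art. 5, Annex III, and the transparency provisions, and the keyword lists below are hypothetical:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (Art. 5)"
    HIGH = "high-risk (Annex III)"
    LIMITED = "limited-risk (transparency)"
    MINIMAL = "minimal-risk"

# Hypothetical use-case labels for a simple keyword triage -- a real
# inventory would classify against the regulation's full definitions.
PROHIBITED_USES = {"social scoring", "workplace emotion recognition",
                   "untargeted face scraping"}
ANNEX_III_USES = {"recruitment screening", "credit scoring",
                  "exam proctoring", "border control"}
LIMITED_RISK_USES = {"customer chatbot", "deepfake generation"}

def triage(use_case: str) -> RiskTier:
    """First-pass tier assignment for one AI inventory entry."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in ANNEX_III_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

inventory = ["recruitment screening", "customer chatbot", "spam filtering"]
for use in inventory:
    print(f"{use}: {triage(use).value}")
```

The value of even a crude triage is prioritization: systems landing in the prohibited or high-risk buckets get legal review first.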

Q3 2025 (July – September 2025)

2 August 2025: GPAI deadline active

  • GPAI providers: finalize and publish technical documentation
  • GPAI providers: implement EU Copyright Directive compliance measures for training data
  • Systemic risk GPAI providers: initiate model evaluations and adversarial testing
  • All organizations: begin high-risk AI system gap analysis against Articles 9–15
  • Identify notified body candidates for third-party conformity assessments

Q4 2025 (October – December 2025)

  • Develop risk management systems for each high-risk AI system (Art. 9)
  • Establish data governance frameworks: bias detection, representativeness audits, error rate tracking (Art. 10)
  • Begin drafting technical documentation per Annex IV specifications
  • Train staff on EU AI Act requirements and internal governance policies

Q1 2026 (January – March 2026)

  • Implement automatic logging and record-keeping systems (Art. 12)
  • Design and validate human oversight mechanisms (Art. 14)
  • Conduct accuracy, robustness, and cybersecurity testing (Art. 15)
  • Prepare instructions for use for deployers (Art. 13)
  • Engage notified body if third-party conformity assessment required

Q2 2026 (April – June 2026)

  • Complete conformity assessment (self-assessment or notified body audit)
  • Prepare EU Declaration of Conformity
  • Affix CE marking to compliant high-risk AI systems
  • Register high-risk AI systems in the EU database
  • Conduct final internal audit against all Articles 8–15 requirements

Q3 2026 (July – September 2026)

2 August 2026: High-risk AI deadline active

  • High-risk AI system requirements enforceable — compliance mandatory
  • Establish post-market monitoring systems
  • Implement serious incident reporting procedures
  • Begin compliance monitoring and drift detection

Q4 2026 – Q2 2027

  • Annex I product safety AI systems: complete requirements alignment
  • Limited-risk transparency obligations: implement AI disclosure mechanisms
  • Prepare right-to-explanation capabilities for high-risk systems (Art. 86)
  • Conduct periodic re-assessment of AI system risk classifications

Q3 2027 (August 2027+)

2 August 2027: Full enforcement — no remaining grace periods

  • All EU AI Act provisions fully enforceable
  • Continuous compliance: monitoring, documentation updates, periodic testing
  • Annual review of AI system inventory and risk classifications
  • Track regulatory updates and European Commission delegated acts

8. SME & Startup Provisions

The EU AI Act includes several provisions specifically designed to reduce the compliance burden on small and medium-sized enterprises (SMEs), micro-enterprises, and startups. These are not deadline extensions but rather structural accommodations:

  • Regulatory sandboxes (Art. 57) — Each member state must establish at least one AI regulatory sandbox by August 2026. SMEs and startups receive priority access. Sandboxes provide a controlled environment for developing and testing AI systems with regulatory guidance.
  • Reduced conformity assessment fees — Notified bodies must set proportionate fees for SMEs and startups conducting conformity assessments. The European Commission has published guidance on what constitutes “proportionate.”
  • SME-specific guidance — The European Commission and AI Office must provide simplified guidance documents and templates designed for SME compliance capacity.
  • Proportionate penalties — Fines for SMEs and startups are capped at the lower of the fixed Euro amount or the turnover percentage. For a company with €5M turnover, the maximum prohibited-practice fine would be €350K (7% of turnover), not €35M.
  • Real-world testing exemptions — SMEs can benefit from simplified conditions for real-world testing of AI systems within regulatory sandboxes (Art. 60).
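The proportionate-penalty rule is easy to verify with a few lines of arithmetic implementing the lower-of logic described above. The function names are our own; the figures are the prohibited-practice caps:

```python
def sme_max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """SME/startup rule (in the spirit of Art. 99(6)): the LOWER of the
    fixed ceiling or the turnover percentage applies."""
    return min(fixed_cap_eur, turnover_eur * pct_cap / 100)

def enterprise_max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Other enterprises face the HIGHER of the two caps."""
    return max(fixed_cap_eur, turnover_eur * pct_cap / 100)

# Prohibited-practice tier: EUR 35M or 7% of worldwide annual turnover.
print(sme_max_fine(5_000_000, 35_000_000, 7))         # EUR 350K for a EUR 5M-turnover SME
print(enterprise_max_fine(1_000_000_000, 35_000_000, 7))  # EUR 70M for a EUR 1B-turnover enterprise
```

This matches the €350K example in the text: for small turnovers the percentage cap binds, while for large enterprises the percentage pushes exposure above the fixed amount.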

Important Clarification

SME provisions reduce the burden of compliance — they do not eliminate the obligation. The same deadlines apply to all organizations regardless of size. An SME deploying a high-risk AI system in employment decisions must meet the same Article 9–15 requirements as a large enterprise, just with proportionate fees and penalty caps.

9. Action Checklist by Deadline

Use this checklist to verify your organization's readiness for each enforcement phase:

Phase 1 (Feb 2025)

ACTIVE
  • □ Audit all AI systems against the 8 prohibited categories
  • □ Decommission or modify prohibited systems
  • □ Document compliance decisions and retain evidence
  • □ Brief leadership on prohibited practice risks
  • □ Monitor enforcement actions by national authorities

Phase 2 (Aug 2025)

5 MONTHS
  • □ GPAI: Finalize technical documentation
  • □ GPAI: Complete training data copyright audit
  • □ GPAI (systemic): Model evaluation and adversarial testing program
  • □ GPAI (systemic): Energy consumption disclosure prepared
  • □ Designate EU representative if non-EU provider

Phase 3 (Aug 2026)

17 MONTHS
  • □ Complete risk management system for each high-risk AI (Art. 9)
  • □ Data governance framework operational (Art. 10)
  • □ Technical documentation per Annex IV (Art. 11)
  • □ Automatic logging implemented (Art. 12)
  • □ Instructions for use prepared (Art. 13)
  • □ Human oversight mechanisms designed and tested (Art. 14)
  • □ Accuracy, robustness, cybersecurity validated (Art. 15)
  • □ Conformity assessment completed
  • □ CE marking affixed, EU database registration done

Phase 4 (Aug 2027)

29 MONTHS
  • □ Annex I product-safety AI systems compliant
  • □ Limited-risk transparency obligations implemented
  • □ Right-to-explanation capability operational (Art. 86)
  • □ Post-market monitoring system active (Art. 72)
  • □ Serious incident reporting procedure in place (Art. 73)
  • □ All staff trained on relevant obligations

10. Frequently Asked Questions

What is the first EU AI Act deadline that has already passed?

The first enforcement deadline was 2 February 2025, when prohibited AI practices became illegal. This includes social scoring, manipulative AI, untargeted facial-recognition scraping, emotion recognition in workplaces and schools, and real-time biometric identification in public spaces (with narrow law enforcement exceptions). Violations carry fines up to EUR 35 million or 7% of global annual turnover, whichever is higher.

Are there deadline extensions or transition periods for SMEs?

The compliance deadlines are the same for all organizations regardless of size. However, SMEs benefit from structural accommodations: regulatory sandbox priority access, reduced conformity assessment fees, SME-specific guidance documents, and proportionate penalty caps (fines are capped at the lower of the fixed Euro amount or the turnover percentage). A company with EUR 5M turnover faces a maximum prohibited-practice fine of EUR 350K, not EUR 35M.

What happens between August 2026 and August 2027?

This 12-month window is the final transition period. By August 2026, high-risk AI system requirements under Annex III are enforceable. During the following year, Annex I high-risk systems (those under existing product safety legislation like the Medical Devices Regulation and Machinery Regulation) must also come into compliance. National market surveillance authorities become fully operational. By 2 August 2027, all provisions are enforceable with no remaining grace periods.

When should companies start EU AI Act compliance preparation?

Immediately. The prohibited practices deadline has passed. GPAI obligations apply from August 2025. For high-risk AI systems, the August 2026 deadline requires conformity assessments, technical documentation, risk management systems, and post-market monitoring. Industry benchmarks indicate a full compliance program takes 12 to 18 months for a mid-size enterprise. Organizations that have not started face a significant risk of non-compliance when high-risk provisions take effect.

Harish Kumar

Founder & CEO, Quantamix Solutions B.V.

18+ years in enterprise AI across Amazon Ring, Philips (GenAI Champions), ING Bank, Rabobank (€400B+ loan portfolio), Deutsche Bank, Reserve Bank of India, and EY. FRM, PMP, GCP certified. Patent holder (EP26162901.8). Published researcher (SSRN 6359818). Building traceable, auditable AI for regulated industries.