
EU AI Act Compliance Deadline 2025: What Changes in August and What to Do Now

The EU AI Act's phased implementation schedule is not a suggestion — it is a statutory countdown. August 2, 2025 brings GPAI and Chapter 5 obligations into force. August 2, 2026 is the full high-risk compliance deadline. February 2, 2025 already made prohibited practices unlawful. This guide maps every deadline, explains what each means in practice, and provides a concrete 90-day catch-up plan for organizations that need to act now.

Updated December 2, 2025

1. August 2, 2025: Chapter V GPAI Obligations Take Effect

Twelve months after the EU AI Act's entry into force, Chapter V (General-Purpose AI Models) became legally applicable. Article 4 (AI Literacy) has applied even longer, since February 2, 2025. Together, these provisions affect every organization in Europe that uses, integrates, or provides general-purpose AI models.

GPAI Provider Documentation (Article 53)

Immediate enforcement risk

Applies to: GPAI model providers

Must have technical documentation (Annex XI), training data summary, and downstream provider transparency measures in place. Providers placing models on the EU market after August 2, 2025 without this documentation are in immediate violation.

Systemic Risk Adversarial Testing (Article 55)

Enforcement risk for large providers

Applies to: Systemic-risk GPAI providers

Must have completed adversarial testing (red-teaming) before or immediately after this date. Testing must cover dangerous capability evaluations, bias at scale, and cybersecurity vulnerabilities. Results must be reported to the EU AI Office.

AI Literacy for All Staff (Article 4)

Medium — documentation required

Applies to: All organizations using AI

Organizations must ensure their staff have a sufficient level of AI literacy, taking into account their tasks and the extent of AI use. This applies to every employee who interacts with, oversees, or makes decisions based on AI systems. Note that Article 4 has in fact applied since February 2, 2025, so organizations that have not yet acted are already overdue.

GPAI Deployer Obligations (Articles 26 and 50)

Medium — process documentation

Applies to: Organizations deploying GPAI APIs

Deployers using GPAI models in products or services must have Article 50 transparency mechanisms in place, incident reporting procedures established, and post-market monitoring processes documented.

2. August 2, 2026: High-Risk AI Systems Full Compliance Required

The most demanding compliance deadline — full high-risk AI system obligations under Chapters III and IV — takes effect on August 2, 2026. Organizations with high-risk AI systems have until this date to achieve complete conformity, but given the complexity of the requirements, preparation should have started in 2024.

Obligation Area | Article | Complexity | Lead Time Needed
Risk Management System | Art. 9 | High | 6–12 months
Data and Data Governance | Art. 10 | High | 6–9 months
Technical Documentation | Art. 11 | Medium | 3–6 months
Record-Keeping and Logging | Art. 12 | Medium | 3–6 months
Transparency and User Information | Art. 13 | Low–Medium | 2–4 months
Human Oversight Measures | Art. 14 | High | 6–12 months
Accuracy, Robustness, Cybersecurity | Art. 15 | High | 6–12 months
Conformity Assessment | Art. 43 | High | 4–8 months
EU Database Registration | Art. 49 | Low | 1–2 months
Post-Market Monitoring Plan | Art. 72 | Medium | 3–5 months

Critical Path: Conformity assessment (Article 43) is often the longest-lead activity because it may require engagement with a notified body (for certain high-risk applications) or an external auditor. Notified bodies for AI systems are still being designated across member states — early engagement is essential. Start your conformity assessment process at least 8 months before your August 2026 deadline.
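As a rough planning aid, the upper-bound lead times from the table above can be turned into latest-start dates. The sketch below assumes the table's upper-bound figures and uses simple calendar-month arithmetic; it is an illustration, not project-planning advice:

```python
from datetime import date

DEADLINE = date(2026, 8, 2)  # full high-risk compliance deadline

# Upper-bound lead times (in months) taken from the obligation table above
LEAD_TIMES = {
    "Risk Management System (Art. 9)": 12,
    "Conformity Assessment (Art. 43)": 8,
    "Technical Documentation (Art. 11)": 6,
    "EU Database Registration (Art. 49)": 2,
}

def months_before(deadline: date, months: int) -> date:
    """Return the same day-of-month `months` calendar months before `deadline`."""
    total = deadline.year * 12 + (deadline.month - 1) - months
    return deadline.replace(year=total // 12, month=total % 12 + 1)

for obligation, months in sorted(LEAD_TIMES.items(), key=lambda kv: -kv[1]):
    print(f"{obligation}: start no later than {months_before(DEADLINE, months)}")
```

Run against the August 2, 2026 deadline, the 12-month items land in August 2025, which is why the text above stresses that preparation should already be under way.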

3. February 2, 2025: Prohibited Practices Ban Already in Force

The first and most immediate compliance deadline — February 2, 2025 — has already passed. From this date, the AI practices listed in Article 5 became unlawful across all EU member states. Organizations must have ceased any such practices immediately.

Subliminal Manipulation Techniques

AI systems that deploy subliminal techniques beyond a person's consciousness to distort their behavior in a way that causes or is likely to cause harm.

Exploitation of Vulnerable Groups

AI targeting specific vulnerabilities of persons due to age, disability, or socioeconomic circumstances to distort behavior harmfully.

Social Scoring

AI systems used to evaluate or classify natural persons based on social behavior or inferred personal characteristics, where the resulting score leads to detrimental or unfavourable treatment. The final Act applies this ban to private actors as well as public authorities.

Real-Time Biometric ID in Public Spaces

Remote real-time biometric identification systems in publicly accessible spaces by law enforcement, except in strictly limited circumstances.

Predictive Policing Based on Profiling

AI systems that assess the risk of a natural person committing a criminal offence based solely on profiling or an assessment of personality traits, without an objective factual basis.

Facial Recognition Databases via Scraping

Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV.

Violations of Article 5 carry the highest penalties in the EU AI Act — up to €35 million or 7% of global annual turnover, whichever is higher. Any organization that has not reviewed its AI portfolio against the prohibited practices list should do so immediately.

4. What Organizations Are Behind and How to Catch Up

Based on TraceGov.ai's TRACE assessments across European organizations, three categories of compliance gaps appear consistently in 2025:

Gap 1: No AI Inventory

68% of organizations

Organizations cannot demonstrate compliance without first knowing what AI systems they operate. Many lack a systematic inventory distinguishing which systems are GPAI deployments, which are high-risk under Annex III, and which are low-risk or excluded. Fix: conduct a structured AI inventory exercise as the first step of any compliance program.

Week 1–2 of 90-day plan
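To make the inventory fix concrete, a minimal sketch of what an inventory record might look like is shown below. The category names, field names, and example systems are illustrative assumptions, not a regulatory schema:

```python
from dataclasses import dataclass

# Illustrative classification buckets from the inventory step described above
CATEGORIES = {"prohibited", "high_risk_annex_iii", "gpai_deployment", "minimal_risk"}

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    category: str          # must be one of CATEGORIES
    owner: str             # accountable compliance owner
    customer_facing: bool = False

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category!r}")

inventory = [
    AISystemRecord("Support chatbot", "OpenAI API", "gpai_deployment",
                   owner="CX Lead", customer_facing=True),
    AISystemRecord("CV screening tool", "In-house", "high_risk_annex_iii",
                   owner="HR Director"),
]

# Customer-facing GPAI deployments are the ones in scope for Article 50 disclosure
article_50_scope = [r.name for r in inventory
                    if r.category == "gpai_deployment" and r.customer_facing]
```

Even a flat list like this answers the first question a regulator will ask: what AI do you run, and who owns it.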

Gap 2: Article 50 Transparency Not Implemented

54% of organizations using GPAI APIs

Organizations using ChatGPT, Claude, or Gemini APIs in customer-facing systems have not implemented user-facing disclosure mechanisms. Fix: implement disclosure banners, metadata labels, and terms-of-service language for all AI-generated customer interactions.

Week 3–6 of 90-day plan
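A minimal sketch of what such a disclosure wrapper could look like in an API response layer follows. The disclosure wording and field names are illustrative assumptions and have not been legally reviewed:

```python
DISCLOSURE_TEXT = (
    "This response was generated with the assistance of an AI system. "
    "You are interacting with AI-generated content."
)

def with_ai_disclosure(response_text: str, model_name: str) -> dict:
    """Attach a user-facing disclosure and machine-readable provenance metadata."""
    return {
        "content": response_text,
        "disclosure": DISCLOSURE_TEXT,   # rendered as the UI banner text
        "metadata": {
            "ai_generated": True,        # machine-readable label
            "model": model_name,
        },
    }

payload = with_ai_disclosure("Your refund has been approved.", "example-model")
```

Wrapping every AI-generated response at one choke point means the disclosure cannot be silently dropped by an individual feature team.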

Gap 3: No Incident Response Procedures

81% of organizations

Organizations lack documented procedures for identifying, escalating, and reporting AI incidents. Without these, serious-incident reporting under Article 73 is impossible. Fix: adopt and document incident classification criteria, escalation paths, and market surveillance authority (MSA) contact information before any incident occurs.

Week 4–8 of 90-day plan
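Classification criteria work best when they are encoded so every incident gets a consistent severity and escalation path. The sketch below uses illustrative criteria and thresholds that would need legal sign-off before use:

```python
from enum import Enum

class Severity(Enum):
    LOW = "low"            # log internally, review in batch
    SERIOUS = "serious"    # escalate to compliance lead; assess reporting duty
    CRITICAL = "critical"  # immediate escalation and MSA notification

def classify_incident(harm_to_persons: bool,
                      fundamental_rights_impact: bool,
                      widespread: bool) -> Severity:
    """Map illustrative incident criteria to a severity level."""
    if harm_to_persons or fundamental_rights_impact:
        return Severity.CRITICAL if widespread else Severity.SERIOUS
    return Severity.LOW
```

The point is not the specific rules but that the decision is written down in advance, so the person on call at 2 a.m. does not have to improvise a regulatory judgment.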

5. Prioritization Framework: Which Deadlines Are Most Critical for Your Archetype

Not every organization faces the same compliance urgency. The right prioritization depends on your archetype — your role in the AI value chain and the risk classification of your AI systems.

Archetype | Highest Priority Deadline | First Action
GPAI Provider (commercial) | Aug 2025 — NOW overdue | Annex XI documentation + training data summary
GPAI Provider (open-source) | Aug 2025 — NOW overdue | Transparency policy + community reporting channel
Enterprise GPAI Deployer (customer-facing) | Aug 2025 — Article 50 disclosure | Implement AI disclosure UI components
High-Risk AI Provider (Annex III) | Aug 2026 — 8 months away | Start risk management system documentation now
High-Risk AI Deployer | Aug 2026 + verify provider compliance | Request provider conformity documentation
General Enterprise (no Annex III) | Feb 2025 — Article 4 AI literacy, already applicable | Deploy AI literacy training for all staff

6. 90-Day Catch-Up Plan for Organizations Starting Today

For organizations that are behind on EU AI Act compliance, a structured 90-day program can address the most critical obligations and establish the foundations for ongoing compliance. This plan covers general deployer obligations applicable to most European organizations.

Days 1–14: Foundation

  • Complete AI system inventory — list every AI tool, API, and system in use
  • Classify each system: GPAI deployment, high-risk (Annex III), prohibited, or general purpose
  • Identify Article 5 prohibited practice risks — stop any non-compliant use immediately
  • Assign compliance ownership: AI compliance lead or team responsible for each system

Days 15–30: GPAI Deployer Compliance

  • Implement Article 50 transparency for all customer-facing GPAI systems
  • Draft and publish AI usage disclosure language for terms of service
  • Request training data summaries from GPAI API providers
  • Add copyright indemnification clauses to provider contracts

Days 31–50: Incident Readiness

  • Document incident classification criteria (serious incident threshold analysis)
  • Identify national MSA contacts for each member state of operation
  • Build incident response playbook: detection → internal escalation → MSA notification
  • Configure Article 12 logging for all high-risk and GPAI deployments
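The logging step can start as simply as an append-only structured audit log keyed to the AI inventory. This sketch is in the spirit of Article 12 record-keeping; the field names are illustrative assumptions, not a regulatory schema:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
logger.setLevel(logging.INFO)

def log_ai_event(system_id: str, event: str, user_role: str, **details) -> str:
    """Emit one structured, timestamped audit record and return it as JSON."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,   # ties back to the AI system inventory
        "event": event,           # e.g. "inference", "human_override", "incident"
        "user_role": user_role,   # supports human-oversight audits
        "details": details,
    }
    line = json.dumps(record, sort_keys=True)
    logger.info(line)
    return line
```

Because every record carries a timestamp, a system identifier, and the acting role, the same log stream can later back both incident investigations and human-oversight evidence.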

Days 51–70: AI Literacy Program

  • Develop role-based AI literacy curriculum for all staff using AI
  • Deliver mandatory training to all employees interacting with AI systems
  • Document training completion for regulatory evidence
  • Brief board and senior leadership on EU AI Act obligations and liability

Days 71–90: Documentation and Gap Closure

  • Complete compliance gap register with risk ratings for each open item
  • Prioritize August 2026 high-risk obligations: begin risk management system for Annex III systems
  • Schedule TRACE score assessment to establish compliance baseline
  • Engage legal counsel to review AI contracts and supplier agreements

TraceGov.ai Acceleration: TraceGov.ai's TAMR+ engine can compress the 90-day plan substantially by automating AI system classification (days 1–14), generating Article 50 disclosure templates (days 15–30), and producing a pre-populated TRACE score report that maps your specific AI portfolio to applicable articles — replacing weeks of manual legal analysis with hours of guided configuration.

7. Frequently Asked Questions About EU AI Act Deadlines

What happens on August 2, 2025 under the EU AI Act?
August 2, 2025 marked the entry into application of Chapter V (GPAI models). GPAI providers must have technical documentation and training data summaries in place, and systemic-risk providers must have completed adversarial testing. Deployers using GPAI must have Article 50 transparency mechanisms active. Article 4 (AI literacy) has applied since February 2, 2025, so staff AI literacy measures should already be in place.
Are the August 2025 deadlines enforceable?
Yes. The August 2, 2025 deadline is statutory. National authorities and the EU AI Office are empowered to enforce from this date. However, enforcement posture in the initial phase has been described as proportionate — good-faith compliance efforts with documented gaps are viewed more favorably than no action.
What organizations are most at risk of missing the August 2025 deadline?
Three archetypes face highest risk: (1) Organizations deploying GPAI APIs without Article 50 transparency measures; (2) GPAI providers without Annex XI documentation and training data summaries; (3) Any organization that has not conducted Article 4 AI literacy training for staff who interact with AI systems.
Can organizations request an extension to EU AI Act deadlines?
There is no formal extension mechanism; the phased schedule is statutory. However, several member states have indicated that good-faith compliance efforts (documented roadmaps, contracted support, ongoing self-assessment) will be treated as mitigating factors in initial enforcement decisions.
What is the penalty for missing an EU AI Act compliance deadline?
Penalties are tiered: prohibited-practice violations, up to €35 million or 7% of global turnover; high-risk system obligation violations, up to €15 million or 3% of turnover; supply of incorrect information to authorities, up to €7.5 million or 1% of turnover. Missing August 2025 GPAI deadlines falls in the second or third tier depending on the violation.


Harish Kumar

Founder & CEO, Quantamix Solutions B.V.

18+ years building AI governance frameworks across regulated industries. Former ING Bank (Economic Capital Modeling), Rabobank (IFRS9 Engine, €400B+ portfolio), Philips (200-member GenAI Champions Community), Amazon Ring, Deutsche Bank, and Reserve Bank of India. FRM, PMP, GCP certified. Patent holder (EP26162901.8). Published researcher (SSRN 6359818). Creator of TAMR+ methodology (74% vs 38.5% on EU-RegQA benchmark).