1. August 2, 2025: GPAI Obligations Under Chapter V Take Effect
Twelve months after the EU AI Act's entry into force, Chapter V (General-Purpose AI Models) became legally applicable. These provisions affect every organization in Europe that provides general-purpose AI models or integrates them into its products and services. Article 4 (AI Literacy), often grouped with this milestone, has in fact already applied since February 2, 2025.
GPAI Provider Documentation (Article 53)
Immediate enforcement risk. Applies to: GPAI model providers
Must have technical documentation (Annex XI), training data summary, and downstream provider transparency measures in place. Providers placing models on the EU market after August 2, 2025 without this documentation are in immediate violation.
Systemic Risk Adversarial Testing (Article 55)
Enforcement risk for large providers. Applies to: Systemic-risk GPAI providers
Must have completed adversarial testing (red-teaming) before or immediately after this date. Testing must cover dangerous capability evaluations, bias at scale, and cybersecurity vulnerabilities. Results must be reported to the EU AI Office.
AI Literacy for All Staff (Article 4)
Medium — documentation required. Applies to: All organizations using AI
Organizations must ensure their staff have a sufficient level of AI literacy, taking into account their tasks and the extent of AI use. This obligation has in fact applied since February 2, 2025, and covers every employee who interacts with, oversees, or makes decisions based on AI systems.
GPAI Deployer Obligations (Articles 26 and 50)
Medium — process documentation. Applies to: Organizations deploying GPAI APIs
Deployers using GPAI models in products or services must have Article 50 transparency mechanisms in place, incident reporting procedures established, and post-market monitoring processes documented.
2. August 2, 2026: High-Risk AI Systems Full Compliance Required
The most demanding compliance deadline — full high-risk AI system obligations under Chapters III and IV — takes effect on August 2, 2026. Organizations with high-risk AI systems have until this date to achieve complete conformity, but given the complexity of the requirements, preparation should have started in 2024.
| Obligation Area | Article | Complexity | Lead Time Needed |
|---|---|---|---|
| Risk Management System | Art. 9 | High | 6–12 months |
| Data and Data Governance | Art. 10 | High | 6–9 months |
| Technical Documentation | Art. 11 | Medium | 3–6 months |
| Record-Keeping and Logging | Art. 12 | Medium | 3–6 months |
| Transparency and User Information | Art. 13 | Low–Medium | 2–4 months |
| Human Oversight Measures | Art. 14 | High | 6–12 months |
| Accuracy, Robustness, Cybersecurity | Art. 15 | High | 6–12 months |
| Conformity Assessment | Art. 43 | High | 4–8 months |
| EU Database Registration | Art. 49 | Low | 1–2 months |
| Post-Market Monitoring Plan | Art. 72 | Medium | 3–5 months |
Critical Path: Conformity assessment (Article 43) is often the longest-lead activity because it may require engagement with a notified body (for certain high-risk applications) or an external auditor. Notified bodies for AI systems are still being designated across member states — early engagement is essential. Start your conformity assessment process at least 8 months before your August 2026 deadline.
3. February 2, 2025: Prohibited Practices Ban Already in Force
The first and most immediate compliance deadline — February 2, 2025 — has already passed. From this date, the AI practices listed in Article 5 became unlawful across all EU member states, and organizations must have ceased any such practices immediately.
Subliminal Manipulation Techniques
AI systems that deploy subliminal techniques beyond a person's consciousness to distort their behavior in a way that causes or is likely to cause harm.
Exploitation of Vulnerable Groups
AI targeting specific vulnerabilities of persons due to age, disability, or socioeconomic circumstances to distort behavior harmfully.
Social Scoring by Public Authorities
AI used by public authorities for general-purpose social scoring of natural persons based on social behavior or inferred characteristics.
Real-Time Biometric ID in Public Spaces
Remote real-time biometric identification systems in publicly accessible spaces by law enforcement, except in strictly limited circumstances.
Predictive Policing Based on Profiling
AI systems that assess risk of criminal offence based solely on profiling or personality traits, without objective factual basis.
Facial Recognition Databases via Scraping
Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV.
Violations of Article 5 carry the highest penalties in the EU AI Act — up to €35 million or 7% of global annual turnover, whichever is higher. Any organization that has not reviewed its AI portfolio against the prohibited practices list should do so immediately.
4. What Organizations Are Behind and How to Catch Up
Based on TraceGov.ai's TRACE assessments across European organizations, three categories of compliance gaps appear consistently in 2025:
Gap 1: No AI Inventory
Affects 68% of organizations. Organizations cannot demonstrate compliance without first knowing what AI systems they operate. Many lack a systematic inventory distinguishing which systems are GPAI deployments, which are high-risk under Annex III, and which are low-risk or excluded. Fix: conduct a structured AI inventory exercise as the first step of any compliance program. Timing: weeks 1–2 of the 90-day plan.
Gap 2: Article 50 Transparency Not Implemented
Affects 54% of organizations using GPAI APIs. Organizations using ChatGPT, Claude, or Gemini APIs in customer-facing systems have not implemented user-facing disclosure mechanisms. Fix: implement disclosure banners, metadata labels, and terms-of-service language for all AI-generated customer interactions. Timing: weeks 3–6 of the 90-day plan.
Gap 3: No Incident Response Procedures
Affects 81% of organizations. Organizations lack documented procedures for identifying, escalating, and reporting AI incidents. Without these, compliance with the Act's serious-incident reporting obligations (Article 73) is impossible. Fix: adopt and document incident classification criteria, escalation paths, and market surveillance authority (MSA) contact information before any incident occurs. Timing: weeks 4–8 of the 90-day plan.
5. Prioritization Framework: Which Deadlines Are Most Critical for Your Archetype
Not every organization faces the same compliance urgency. The right prioritization depends on your archetype — your role in the AI value chain and the risk classification of your AI systems.
| Archetype | Highest Priority Deadline | First Action |
|---|---|---|
| GPAI Provider (commercial) | Aug 2025 — NOW overdue | Annex XI documentation + training data summary |
| GPAI Provider (open-source) | Aug 2025 — NOW overdue | Transparency policy + community reporting channel |
| Enterprise GPAI Deployer (customer-facing) | Aug 2025 — Article 50 disclosure | Implement AI disclosure UI components |
| High-Risk AI Provider (Annex III) | Aug 2026 — 14 months away | Start risk management system documentation now |
| High-Risk AI Deployer | Aug 2026 + verify provider compliance | Request provider conformity documentation |
| General Enterprise (no Annex III) | Feb 2025 — Article 4 AI literacy (already in force) | Deploy AI literacy training for all staff |
6. 90-Day Catch-Up Plan for Organizations Starting Today
For organizations that are behind on EU AI Act compliance, a structured 90-day program can address the most critical obligations and establish the foundations for ongoing compliance. This plan covers general deployer obligations applicable to most European organizations.
Days 1–14: Foundation
- ✓ Complete AI system inventory — list every AI tool, API, and system in use
- ✓ Classify each system: GPAI deployment, high-risk (Annex III), prohibited, or general purpose
- ✓ Identify Article 5 prohibited practice risks — stop any non-compliant use immediately
- ✓ Assign compliance ownership: an AI compliance lead or team responsible for each system
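The inventory and classification steps above lend themselves to a machine-readable register. The sketch below is illustrative only: the categories, field names, and triage rule are working assumptions for a compliance script, not classifications defined in the Act.

```python
from dataclasses import dataclass
from enum import Enum

class ActCategory(Enum):
    PROHIBITED = "prohibited (Art. 5)"
    HIGH_RISK = "high-risk (Annex III)"
    GPAI_DEPLOYMENT = "GPAI deployment"
    MINIMAL_RISK = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    category: ActCategory
    owner: str                      # accountable compliance lead
    customer_facing: bool = False

# A minimal register: every AI tool, API, and system in use
register = [
    AISystemRecord("support-chatbot", "OpenAI API", ActCategory.GPAI_DEPLOYMENT,
                   owner="compliance@example.com", customer_facing=True),
    AISystemRecord("cv-screening", "in-house", ActCategory.HIGH_RISK,
                   owner="hr-lead@example.com"),
]

# Triage: prohibited/high-risk systems and customer-facing GPAI
# deployments need attention first
urgent = [r.name for r in register
          if r.category in (ActCategory.PROHIBITED, ActCategory.HIGH_RISK)
          or r.customer_facing]
```

Even a register this small gives each system a named owner and a classification that later steps (Article 50 disclosure, incident logging) can key off.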
Days 15–30: GPAI Deployer Compliance
- ✓ Implement Article 50 transparency for all customer-facing GPAI systems
- ✓ Draft and publish AI usage disclosure language for terms of service
- ✓ Request training data summaries from GPAI API providers
- ✓ Add copyright indemnification clauses to provider contracts
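One lightweight way to cover the disclosure items above is to attach both a user-facing notice and a machine-readable label to every AI-generated response before it reaches the customer. A minimal sketch, assuming a JSON response payload; the wrapper, wording, and metadata keys are illustrative, not text prescribed by Article 50.

```python
import json

DISCLOSURE_NOTICE = (
    "This response was generated by an AI system. "
    "You are interacting with artificial intelligence."
)

def wrap_ai_response(text: str, model: str) -> str:
    """Attach a user-facing notice and machine-readable metadata
    to an AI-generated response before it reaches the customer."""
    payload = {
        "content": text,
        "disclosure": DISCLOSURE_NOTICE,   # rendered as a UI banner
        "metadata": {
            "ai_generated": True,          # machine-readable label
            "model": model,                # provenance for audit logs
        },
    }
    return json.dumps(payload)

resp = json.loads(wrap_ai_response("Your order has shipped.", model="gpt-4o"))
```

Routing every customer-facing completion through a single wrapper like this makes the disclosure auditable: there is one place to verify, log, and update the notice text.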
Days 31–50: Incident Readiness
- ✓ Document incident classification criteria (serious incident threshold analysis)
- ✓ Identify national MSA contacts for each member state of operation
- ✓ Build incident response playbook: detection → internal escalation → MSA notification
- ✓ Configure Article 12 logging for all high-risk and GPAI deployments
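The playbook steps above (detection → internal escalation → MSA notification) are easier to evidence when the classification decision is encoded rather than made ad hoc. A minimal sketch; the severity labels and thresholds here are placeholder assumptions that each organization must replace with its own documented serious-incident criteria.

```python
from enum import Enum

class Severity(Enum):
    MINOR = 1        # log internally, review in next audit
    SERIOUS = 2      # escalate to the compliance lead
    REPORTABLE = 3   # notify the national market surveillance authority (MSA)

def classify_incident(harm_to_persons: bool,
                      rights_infringement: bool,
                      service_disruption: bool) -> Severity:
    """Map observed facts about an AI incident to an escalation path.
    Thresholds are illustrative placeholders, not the Act's criteria."""
    if harm_to_persons or rights_infringement:
        return Severity.REPORTABLE
    if service_disruption:
        return Severity.SERIOUS
    return Severity.MINOR

# Example: an outage with no harm to persons escalates internally only
severity = classify_incident(harm_to_persons=False,
                             rights_infringement=False,
                             service_disruption=True)
```

Keeping the decision in code (or an equivalent written decision table) means every escalation leaves a reproducible record of why a given incident was or was not reported.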
Days 51–70: AI Literacy Program
- ✓ Develop role-based AI literacy curriculum for all staff using AI
- ✓ Deliver mandatory training to all employees interacting with AI systems
- ✓ Document training completion for regulatory evidence
- ✓ Brief board and senior leadership on EU AI Act obligations and liability
Days 71–90: Documentation and Gap Closure
- ✓ Complete compliance gap register with risk ratings for each open item
- ✓ Prioritize August 2026 high-risk obligations: begin risk management system for Annex III systems
- ✓ Schedule TRACE score assessment to establish compliance baseline
- ✓ Engage legal counsel to review AI contracts and supplier agreements
TraceGov.ai Acceleration: TraceGov.ai's TAMR+ engine can compress the 90-day plan substantially by automating AI system classification (days 1–14), generating Article 50 disclosure templates (days 15–30), and producing a pre-populated TRACE score report that maps your specific AI portfolio to applicable articles — replacing weeks of manual legal analysis with hours of guided configuration.
