1. Three Enterprise Archetypes
The EU AI Act distributes obligations across the AI value chain. Your compliance program design depends critically on which archetype — or combination of archetypes — describes your organization's role in that chain.
Archetype 1: AI Provider
Develops AI systems and places them on the EU market or puts them into service — under their own name, through distribution channels, or for integration into other products. Carries the primary compliance burden: technical documentation, risk management system, conformity assessment, CE marking, EU database registration, and post-market monitoring.
Examples: AI software vendors, platform providers with AI features, enterprises that build and deploy their own AI systems internally
Archetype 2: AI Deployer
Uses AI systems developed by a provider, under the deployer's own authority. Has a secondary but substantive obligation set: ensure human oversight, monitor performance in operational context, report incidents, maintain logs, and conduct a data protection impact assessment (DPIA) where applicable. A deployer becomes a provider if it substantially modifies the AI system or deploys it under its own name.
Examples: Banks using third-party credit scoring AI, hospitals using vendor AI diagnostic tools, HR departments using AI recruitment screening tools
Archetype 3: GPAI Model Provider
Develops and places on the market general-purpose AI models — AI models trained on large amounts of data using self-supervision at scale that display significant generality and can perform a wide range of distinct tasks. Subject to a dedicated obligation regime (Chapter V) that applies from August 2025. Systemic risk GPAI providers face additional obligations including adversarial testing and incident reporting to the European AI Office.
Examples: Large language model developers, foundation model providers, multi-modal AI model providers with broad commercial use
Most organizations are more than one archetype
A typical large enterprise simultaneously acts as a deployer (using vendor AI tools), a provider (building and deploying internal AI systems that affect employees or customers), and potentially a GPAI model provider (if they fine-tune or serve foundation models). Each archetype triggers distinct obligations that must be managed in parallel.
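To make the mapping concrete, the sketch below encodes the archetype logic as a simple classification helper. The field and role names are illustrative shorthand, not the Act's own terminology, and real determinations turn on legal analysis rather than boolean flags.

```python
from dataclasses import dataclass

@dataclass
class AIActivity:
    """One AI system or model in the portfolio (illustrative fields, not statutory terms)."""
    name: str
    developed_in_house: bool        # built (or substantially modified) by the organization
    placed_on_eu_market: bool       # offered or put into service under the org's own name
    used_under_own_authority: bool  # operated by the organization for its own purposes
    is_gpai_model: bool             # a general-purpose AI model rather than an AI system

def archetypes(a: AIActivity) -> set[str]:
    """Return every archetype one activity triggers; portfolios usually span several."""
    roles: set[str] = set()
    if a.is_gpai_model and a.placed_on_eu_market:
        roles.add("GPAI model provider")
    elif a.developed_in_house and a.placed_on_eu_market:
        roles.add("provider")
    if a.used_under_own_authority:
        # An organization can be provider and deployer of the same internal system.
        roles.add("deployer")
    return roles
```

Running a helper like this over the full AI inventory yields the union of obligation sets the compliance program must cover in parallel.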
2. AI Provider: Implementation Phases
AI provider implementation follows the 6-phase roadmap described in our EU AI Act Compliance Roadmap. For providers, the heaviest implementation investment falls in phases 3 and 4 — documentation and conformity assessment — which together typically consume 50–70% of the total program budget.
| Phase | Duration | Key Activities | % of Budget |
|---|---|---|---|
| Gap Analysis | 4–8 weeks | AI inventory, classification, gap scoring | 5–10% |
| Governance Setup | 4–6 weeks | Policy, roles, committee structure | 8–12% |
| Documentation | 8–16 weeks | Technical docs, Article 9 RMS, Article 10 data governance | 30–40% |
| Conformity Assessment | 6–12 weeks | Annex VI/VII assessment, CE marking, Declaration of Conformity | 20–30% |
| Registration | 1–2 weeks | EU database registration, Declaration publication | 2–5% |
| Monitoring Program | Ongoing | Post-market monitoring, incident reporting, annual review | 15–25% annually |
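As a planning aid, the midpoints of the phase shares above can be turned into a rough allocation of an initial budget. This is a sketch over assumed midpoint shares, not a costing model; the monitoring phase is excluded because it is an annual cost rather than part of the initial spend.

```python
# Midpoints of the phase budget ranges in the table above (initial phases only;
# the monitoring program is an ongoing annual cost, so it is excluded here).
PHASE_SHARE_MIDPOINT = {
    "Gap Analysis": 0.075,
    "Governance Setup": 0.10,
    "Documentation": 0.35,
    "Conformity Assessment": 0.25,
    "Registration": 0.035,
}

def phase_budgets(total_initial_budget: float) -> dict[str, int]:
    """Split an initial budget across phases, normalizing the midpoint shares to 100%."""
    norm = sum(PHASE_SHARE_MIDPOINT.values())  # ~0.81 before normalization
    return {phase: round(total_initial_budget * share / norm)
            for phase, share in PHASE_SHARE_MIDPOINT.items()}

# e.g. a EUR 250K mid-size provider program:
print(phase_budgets(250_000))
# {'Gap Analysis': 23148, 'Governance Setup': 30864, 'Documentation': 108025, ...}
```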
3. AI Deployer: Implementation Phases
Deployer implementation is less technically intensive than provider implementation but requires significant organizational change — particularly around human oversight arrangements and incident reporting procedures that most organizations do not currently have in place.
- Step 1 — AI system inventory: Identify all third-party AI systems in use. Request from each vendor: risk classification, technical documentation summary, instructions for use, and an incident reporting contact (a record schema for tracking these items follows this list).
- Step 2 — Vendor contract review: Audit existing vendor contracts for EU AI Act compliance clauses. Most contracts signed before 2025 are inadequate. Renegotiate or amend them to establish documentation access rights, incident notification obligations, and information provision duties.
- Step 3 — Human oversight implementation: For each high-risk AI system, document the human oversight measures in place, test override and stop capabilities, and train affected staff.
- Step 4 — Log retention setup: Establish log retention procedures (minimum 6 months for most high-risk systems) and ensure logs are accessible to market surveillance authorities upon request.
- Step 5 — Incident and near-miss reporting: Implement internal incident classification and escalation procedures feeding into the Article 73 reporting chain to the provider.
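The five steps above translate naturally into a per-system inventory record. The sketch below uses illustrative field names; the six-month retention floor comes from Step 4.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VendorAISystemRecord:
    """One third-party AI system in the deployer's inventory (illustrative schema)."""
    system_name: str
    vendor: str
    risk_classification: str            # vendor-supplied, e.g. "high-risk (Annex III)"
    has_tech_doc_summary: bool          # technical documentation summary received
    has_instructions_for_use: bool      # instructions for use received
    incident_contact: str               # vendor incident-reporting contact
    contract_has_ai_act_clauses: bool   # Step 2: documentation access + incident notice
    log_retention_months: int = 6       # Step 4: at least six months for high-risk systems
    last_oversight_test: date | None = None  # Step 3: last override/stop drill

def gaps(r: VendorAISystemRecord) -> list[str]:
    """Flag the items Steps 1-4 require but the record does not yet evidence."""
    issues = []
    if not r.has_tech_doc_summary:
        issues.append("request technical documentation summary")
    if not r.contract_has_ai_act_clauses:
        issues.append("amend contract: documentation access and incident notification")
    if r.log_retention_months < 6:
        issues.append("raise log retention to at least six months")
    if r.last_oversight_test is None:
        issues.append("schedule human oversight override/stop test")
    return issues
```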
4. GPAI Model Provider: Implementation Phases
GPAI model provider obligations apply from August 2025. They differ structurally from high-risk AI provider obligations — the primary obligations run toward downstream providers (who integrate the GPAI model into their own systems) and toward the European AI Office, rather than toward end users.
All GPAI Model Providers
Technical documentation for downstream providers; a policy to comply with EU copyright law; a published summary of training content; participation in the GPAI codes of practice
Systemic Risk GPAI Providers (>10²⁵ FLOPs)
All base obligations PLUS: adversarial testing (red-teaming); serious incident reporting to the European AI Office without undue delay; cybersecurity measures; energy efficiency reporting
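The systemic risk presumption turns on cumulative training compute. A quick screening check can use the standard ~6 x parameters x training tokens approximation for dense transformer training. Note that this heuristic is a community convention, not the Act's method, which counts actual cumulative compute.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold: cumulative training compute

def estimated_training_flops(n_params: float, n_training_tokens: float) -> float:
    """Standard ~6*N*D approximation for dense transformer training compute.
    An estimate only; the Act looks at actual cumulative compute."""
    return 6.0 * n_params * n_training_tokens

def presumed_systemic_risk(n_params: float, n_training_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_training_tokens) > SYSTEMIC_RISK_FLOPS

# e.g. a 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the 1e25 presumption threshold
print(presumed_systemic_risk(7e10, 1.5e13))  # False
```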
5. Resource Requirements and Budget Ranges
Budget requirements vary by organization size, AI footprint, and archetype. The following ranges are based on TraceGov.ai client implementations and industry survey data from the European AI Alliance.
| Organization Profile | Initial Implementation | Annual Maintenance | FTE Requirement |
|---|---|---|---|
| SME, 1–3 high-risk AI systems, deployer only | €50K–€100K | €15K–€30K | 0.5–1 FTE |
| SME, 1–3 high-risk AI systems, provider | €100K–€200K | €30K–€60K | 1–2 FTE |
| Mid-size, 4–10 high-risk AI systems, mixed | €200K–€350K | €60K–€120K | 2–4 FTE |
| Large enterprise, 10+ high-risk AI systems | €350K–€500K | €100K–€200K | 4–8 FTE |
| GPAI model provider (non-systemic risk) | €150K–€300K | €50K–€100K | 2–4 FTE |
| GPAI model provider (systemic risk) | €500K+ | €200K+ | 8+ FTE |
Research Note
Analysis of TraceGov.ai implementation data (SSRN 6359818) shows that organizations using purpose-built compliance platforms reduce initial implementation costs by 35–50% compared to in-house build approaches, primarily through automated documentation generation and reduced legal review cycles. The platform cost is typically recovered within 8–14 months.
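The 8–14 month payback figure follows from simple arithmetic: the one-off saving on the initial build divided by the monthly platform fee. In the sketch below, the fee and build cost are hypothetical inputs, and the 42.5% savings rate is just the midpoint of the cited 35–50% range.

```python
def payback_months(monthly_platform_fee: float,
                   in_house_initial_cost: float,
                   savings_rate: float = 0.425) -> float:
    """Months until cumulative platform fees equal the one-off implementation saving.
    savings_rate defaults to the midpoint of the cited 35-50% reduction (an assumption)."""
    one_off_saving = in_house_initial_cost * savings_rate
    return one_off_saving / monthly_platform_fee

# e.g. a hypothetical EUR 8K/month platform vs a EUR 250K in-house build:
# 250,000 * 0.425 / 8,000 ~= 13.3 months, within the cited 8-14 month range
print(round(payback_months(8_000, 250_000), 1))
```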
6. Build vs Buy Decision Framework
The compliance tooling decision is one of the most consequential early choices in an EU AI Act implementation program. Building in-house provides maximum customization but typically underestimates ongoing maintenance burden. Buying a purpose-built platform provides faster time-to-compliance but requires careful vendor evaluation.
| Decision Factor | Build In-House | Buy Platform |
|---|---|---|
| High-risk AI systems < 5 | Viable with document management system + legal support | Overpowered; simpler tools sufficient |
| High-risk AI systems 5–15 | High maintenance burden; risk of documentation drift | Strong ROI; platform cost offset by FTE savings |
| High-risk AI systems 15+ | Not recommended; complexity exceeds internal capacity | Essential; manual tracking at this scale creates systemic gaps |
| GPAI model obligations | Very high complexity; specialized regulatory expertise required | Recommended; purpose-built GPAI compliance modules available |
| Regulatory change velocity | Manual update processes; high risk of staleness | Platform vendor maintains regulatory currency |
| Audit readiness | Evidence collection is manual and time-consuming | Automated evidence packages; TRACE score exportable |
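The decision table reduces to a short rule chain. The sketch below encodes the table's thresholds directly; it is a starting point for discussion, not a substitute for vendor evaluation.

```python
def tooling_recommendation(high_risk_systems: int, has_gpai_obligations: bool) -> str:
    """Encode the build-vs-buy decision table above as a simple rule chain."""
    if has_gpai_obligations:
        return "buy: specialized GPAI compliance modules outweigh an in-house build"
    if high_risk_systems < 5:
        return "either: a document management system plus legal support is viable"
    if high_risk_systems <= 15:
        return "buy: platform cost is typically offset by FTE savings"
    return "buy: manual tracking at 15+ systems creates systemic gaps"

print(tooling_recommendation(high_risk_systems=8, has_gpai_obligations=False))
```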
7. Common Failure Modes in Implementation
Failure Mode 1: Treating compliance as a legal project
Legal teams can review documentation but cannot build a risk management system, implement logging infrastructure, or test human oversight controls. EU AI Act compliance requires engineering, data science, product, and legal to collaborate under a cross-functional program structure from day one.
Failure Mode 2: Underestimating Article 9 complexity
The risk management system requirement is not satisfied by a risk assessment document. Article 9 requires a continuously operating system that identifies risks, estimates them, adopts mitigations, and is tested against actual system performance. Organizations that produce a static risk assessment document and call it their Article 9 system will fail conformity assessment.
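The difference between a static document and a living system shows up in the data model. A continuously operating RMS attaches lifecycle state to every risk, as in this minimal sketch with illustrative fields:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    """One identified risk in a continuously operating Article 9 system (illustrative)."""
    description: str
    severity: int                     # estimated impact, 1 (low) to 5 (critical)
    likelihood: int                   # estimated likelihood, 1 to 5
    mitigations: list[str] = field(default_factory=list)
    residual_acceptable: bool = False
    last_tested: date | None = None   # last test against actual system performance

def review_due(risk: Risk, today: date, max_age_days: int = 90) -> bool:
    """A static document never re-tests; a living system flags stale evaluations."""
    return risk.last_tested is None or (today - risk.last_tested).days > max_age_days
```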
Failure Mode 3: No named system owner
High-risk AI systems without a named individual accountable for compliance — not just a role title — consistently exhibit documentation drift. The governance policy must name individuals, and accountability must be reinforced through performance objectives and regular committee reporting.
Failure Mode 4: Vendor supply chain left unresolved
Deployers frequently discover that their AI vendors cannot provide the technical documentation required for Article 26 compliance, or that their contracts do not require vendors to notify them of serious incidents. Vendor contract remediation should begin in Phase 1, not after classification is complete.
Failure Mode 5: Treating the deadline as the end of the project
Organizations that structure their programs as a project with an August 2026 end date will be non-compliant by September 2026. Post-market monitoring, incident reporting, documentation maintenance, and annual governance reviews are permanent operational obligations, not project deliverables.
8. TraceGov.ai Implementation Accelerator
TraceGov.ai provides a structured implementation accelerator designed to compress the provider compliance program from 12–18 months to 4–6 months for organizations with up to 10 high-risk AI systems. The accelerator is built on the TAMR+ (Trace-Augmented Multi-hop Reasoning) knowledge graph methodology, protected under Patent EP26162901.8.
Key accelerator capabilities:
- Automated inventory and classification: Scans connected systems and classifies AI components against Annex III criteria, producing a classification register with documented rationale for each decision (a generic register-entry sketch follows this list).
- Technical documentation generation: Uses TAMR+ to generate Annex IV-compliant technical documentation from existing system artifacts — code repositories, test suites, architecture diagrams — reducing documentation preparation from 8–16 weeks to 2–4 weeks.
- Risk management system scaffolding: Deploys a pre-structured Article 9 risk management system with risk identification templates, mitigation tracking, and residual risk evaluation workflows.
- TRACE score: Continuous compliance health score across all registered AI systems, providing a quantified audit-readiness indicator updated in real time as evidence is added or requirements change.
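For illustration only, a classification register entry with documented rationale might look like the following. This is a generic sketch, not TraceGov.ai's actual data model:

```python
from dataclasses import dataclass

@dataclass
class ClassificationEntry:
    """One row in a classification register (generic illustration, not the platform's schema)."""
    system_name: str
    annex_iii_category: str | None   # None if the system falls outside Annex III
    classification: str              # "high-risk" | "limited-risk" | "minimal-risk"
    rationale: str                   # documented reasoning behind the decision
    classified_by: str
    reviewed_on: str

entry = ClassificationEntry(
    system_name="cv-screening-v2",
    annex_iii_category="4: employment, workers management and access to self-employment",
    classification="high-risk",
    rationale="Filters and ranks job applications, which Annex III point 4 covers.",
    classified_by="j.doe",
    reviewed_on="2025-11-03",
)
```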
TAMR+ Benchmark Performance
TAMR+ achieves 74% accuracy on the EU-RegQA benchmark for regulatory question answering, compared to 38.5% for conventional vector-based RAG approaches. This accuracy differential is particularly significant for multi-hop regulatory reasoning tasks — for example, determining which specific Articles apply to a given AI system feature, or identifying the correct conformity assessment pathway for a hybrid system spanning multiple Annex III categories.
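Multi-hop here means chaining cross-references: the answer to one question depends on a provision that itself points to further provisions. The toy traversal below illustrates the idea over a hand-built cross-reference graph; it is a generic sketch and says nothing about how TAMR+ is actually implemented, and the example edges are simplified readings of the Act rather than an exhaustive map.

```python
from collections import deque

# Toy cross-reference graph: each node lists provisions it depends on (illustrative edges).
CROSS_REFS = {
    "biometric categorisation feature": ["Annex III point 1"],
    "Annex III point 1": ["Article 6(2)"],      # Annex III listing makes the system high-risk
    "Article 6(2)": ["Article 43"],             # high-risk systems need conformity assessment
    "Article 43": ["Annex VI", "Annex VII"],    # which assessment pathway applies
}

def applicable_provisions(start: str) -> list[str]:
    """Breadth-first walk of the cross-reference graph: each hop is one
    'the answer depends on that provision' step."""
    seen, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        for ref in CROSS_REFS.get(node, []):
            if ref not in seen:
                seen.add(ref)
                order.append(ref)
                queue.append(ref)
    return order

print(applicable_provisions("biometric categorisation feature"))
# -> ['Annex III point 1', 'Article 6(2)', 'Article 43', 'Annex VI', 'Annex VII']
```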
9. Frequently Asked Questions
- What is the difference between an AI provider and an AI deployer under the EU AI Act?
- How much does EU AI Act implementation cost for an enterprise?
- Should organizations build or buy EU AI Act compliance tooling?
- What are the most common EU AI Act implementation failure modes?
- Do GPAI model providers face different obligations than high-risk AI providers?
Related EU AI Act Guides
- How to Comply with the EU AI Act: 6-phase implementation roadmap with timelines for 2025–2027 obligations
- EU AI Act Conformity Assessment Step-by-Step: Annex VI vs Annex VII pathways, CE marking, and EU database registration
- The Complete EU AI Act Compliance Guide: definitive pillar guide covering all EU AI Act requirements and timelines
