1. Who Qualifies as a Foundation Model Provider
The EU AI Act does not use the term "foundation model" in its final text. Instead, it defines general-purpose AI (GPAI) models in Article 3(63) as models that display significant generality, are capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications. In practice, this captures what the industry calls foundation models, large language models, and multimodal base models.
Provider Definition Criteria
| Criterion | Description | Examples |
|---|---|---|
| Develops the model | Trains or substantially modifies a GPAI model | OpenAI, Anthropic, Mistral, Meta (LLaMA) |
| Places on EU market | Makes available in the EU, regardless of HQ location | US company offering API access to EU customers |
| Significant generality | Model performs diverse tasks, not narrow AI | LLMs, multimodal models, code generation models |
| Fine-tuning threshold | Fine-tuning that materially changes capabilities creates a new provider | Company fine-tuning LLaMA into a specialized medical model |
| API distribution | Providing model access via API counts as placing on market | Cloud AI services with EU endpoints |
| Weight distribution | Releasing downloadable weights counts as placing on market | Open-source model releases on Hugging Face |
Critical Nuance: A company that fine-tunes an existing GPAI model may become a new GPAI provider if the fine-tuning substantially modifies the model. The threshold is not precisely defined in the regulation, but Recital 97 indicates that fine-tuning for a narrow task does not create a new GPAI model, while fine-tuning that materially changes the model's capabilities or intended purpose likely does.
2. Supply Chain Obligations: Upstream to Downstream
The EU AI Act establishes a three-tier responsibility model for the AI supply chain. Each tier has distinct obligations, but they are interconnected: upstream compliance enables downstream compliance.
Tier 1: GPAI Model Provider (Upstream)
Develops or substantially modifies the foundation model. Bears Article 53 transparency obligations and, if applicable, Article 55 systemic risk obligations.
- Technical documentation (Annex XI)
- Training data summary (AI Office template)
- Copyright compliance policy
- Downstream provider information packages
Tier 2: AI System Provider (Integrator)
Integrates the GPAI model into a specific AI system. Bears Articles 9-15 obligations for high-risk systems. Depends on upstream documentation.
- Risk management system (Article 9)
- Data governance (Article 10)
- Technical documentation (Article 11)
- Transparency to deployers (Article 13)
- Human oversight design (Article 14)
Tier 3: Deployer (Downstream)
Uses the AI system in operational context. Bears Article 26 deployer obligations. Depends on instructions and documentation from provider.
- Use per instructions (Article 26(1))
- Human oversight assignment (Article 26(2))
- Input data relevance (Article 26(4))
- Monitoring and incident reporting (Article 26(5))
This tiered structure means foundation model providers enable or constrain the entire chain. A GPAI provider that delivers incomplete documentation forces downstream integrators into a compliance gap they cannot close independently.
3. Documentation That Must Flow to Deployers
Article 53(1)(b) creates the information flow obligation. The documentation must be sufficient for downstream providers to understand the model's capabilities, limitations, and risks, and to comply with their own regulatory obligations.
Required Documentation Categories
| Category | Content | Enables Downstream Compliance With |
|---|---|---|
| Model Identity | Version, architecture type, parameter count, training date, unique identifier | Article 11 (Technical documentation) |
| Capabilities | What the model can do, benchmarks, performance characteristics | Article 9 (Risk assessment) |
| Limitations | Known failure modes, out-of-distribution behavior, language/domain gaps | Article 9 (Risk management) |
| Bias Assessment | Known biases, demographic performance variations, mitigation measures | Article 10 (Data governance) |
| Safety Evaluations | Adversarial test results, harmful output potential, safety guardrails | Article 15 (Accuracy, robustness) |
| Integration Guidelines | How to properly integrate, recommended guardrails, prohibited uses | Article 13 (Transparency), Art. 14 (Human oversight) |
| Data Governance | Training data characteristics, GDPR considerations, copyright status | Article 10 (Data governance) |
| Update Policy | Version release schedule, backward compatibility, deprecation timeline | Article 72 (Post-market monitoring) |
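For providers assembling these packages programmatically, the categories above map naturally onto a structured schema. The following is a minimal sketch in Python; the `DownstreamPackage` type and its field names are invented for illustration, and Annex XII plus the AI Office's templates, not this sketch, determine what must actually be included.

```python
from dataclasses import dataclass, field

@dataclass
class DownstreamPackage:
    """Hypothetical schema for an Article 53(1)(b) information package.

    Field names are illustrative only; the regulation's annexes define
    the authoritative content requirements.
    """
    model_id: str                          # unique identifier + version
    architecture: str                      # e.g. "decoder-only transformer"
    parameter_count: int
    capabilities: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    bias_assessment: dict[str, str] = field(default_factory=dict)
    safety_evaluations: list[str] = field(default_factory=list)
    integration_guidelines: str = ""
    prohibited_uses: list[str] = field(default_factory=list)
    training_data_summary_url: str = ""    # link to the published summary
    update_policy: str = ""                # versioning / deprecation terms

package = DownstreamPackage(
    model_id="example-lm-7b@2025-01",
    architecture="decoder-only transformer",
    parameter_count=7_000_000_000,
    known_limitations=["degraded performance on low-resource languages"],
)
```

A versioned, machine-readable package like this also makes it straightforward to diff releases and notify integrators when a field they depend on changes.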
Documentation as a Competitive Advantage
GPAI providers who deliver comprehensive, well-structured documentation will attract downstream integrators who need compliance certainty. In a regulated market, documentation quality becomes a differentiator. Providers who view documentation as a cost center rather than a competitive advantage will lose market share to those who treat it as a feature.
4. Liability Chain Under the AI Act
The EU AI Act creates a distributed liability model where each actor in the supply chain is responsible for their specific obligations. Understanding this chain is critical for foundation model providers because non-compliance at the provider level cascades downstream.
Enforcement and Penalties by Actor
| Actor | Primary Obligations | Maximum Penalty |
|---|---|---|
| GPAI Provider | Article 53 transparency, Article 55 systemic risk | Up to 3% of global annual turnover or EUR 15M, whichever is higher |
| AI System Provider | Articles 9-15 high-risk requirements | Up to 3% of global annual turnover or EUR 15M, whichever is higher |
| Deployer | Article 26 deployment obligations | Up to 3% of global annual turnover or EUR 15M, whichever is higher |
| Any actor (prohibited practices) | Article 5 prohibited AI practices | Up to 7% of global annual turnover or EUR 35M, whichever is higher |
Cascading Liability: If a downstream deployer's AI system causes harm because the GPAI provider's documentation failed to disclose a known limitation, enforcement can trace back to the provider. The proposed AI Liability Directive would, if adopted, further strengthen this by establishing a presumption of causality when providers fail to disclose information. Inadequate documentation is therefore not just a regulatory fine risk but a civil liability exposure.
For the full penalties framework, see: EU AI Act Penalties and Enforcement Guide.
5. Joint Obligations: Providers and Deployers
Several EU AI Act requirements create shared responsibilities between GPAI providers and downstream actors. These joint obligations require contractual alignment and ongoing coordination.
Post-Market Monitoring (Article 72)
Providers must establish post-market monitoring systems. Deployers must cooperate by sharing information about incidents, performance degradation, and misuse patterns. GPAI providers must maintain channels for receiving this downstream feedback.
- Provider responsibility: Establish a monitoring system, maintain feedback channels
- Deployer responsibility: Report incidents, share performance data, flag misuse
Serious Incident Reporting (Article 73)
When a serious incident occurs, both the deployer (who detected it) and the provider (who must investigate root cause) have reporting obligations to the relevant market surveillance authority.
- Provider responsibility: Investigate root cause, update documentation, report to the relevant market surveillance authority (and to the AI Office for systemic-risk GPAI models)
- Deployer responsibility: Immediately inform the provider and the relevant market surveillance authority (Article 26(5)), preserve evidence
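Article 73 sets tiered deadlines counted from the moment of awareness: not later than 15 days in the general case, 10 days where a death has occurred, and 2 days for a widespread infringement or a critical-infrastructure incident. A minimal deadline tracker, assuming those windows (verify them against the final text before relying on this) and using illustrative names:

```python
from datetime import datetime, timedelta

# Article 73 reporting windows as this guide reads them; confirm against
# the regulation's final text before encoding them in production tooling.
REPORTING_WINDOWS = {
    "general": timedelta(days=15),
    "death": timedelta(days=10),
    "widespread_or_critical_infrastructure": timedelta(days=2),
}

def reporting_deadline(aware_at: datetime, incident_type: str) -> datetime:
    """Latest permissible report time, counted from awareness."""
    return aware_at + REPORTING_WINDOWS[incident_type]

print(reporting_deadline(datetime(2025, 3, 1, 9, 0), "general"))
# 2025-03-16 09:00:00
```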
Fundamental Rights Impact Assessment (Article 27)
Deployers of high-risk AI systems must conduct FRIAs before deployment. GPAI providers must provide sufficient information to enable this assessment, including known demographic performance variations and bias profiles.
- Provider responsibility: Disclose demographic performance data and known biases
- Deployer responsibility: Conduct the FRIA, document findings, implement mitigations
Transparency to Affected Persons (Article 50)
Persons interacting with AI systems must be informed. This requires coordination: the GPAI provider must disclose what the model generates (e.g., synthetic content), and the deployer must implement user-facing transparency mechanisms.
- Provider responsibility: Provide content marking and synthetic content indicators
- Deployer responsibility: Display AI interaction notices, implement disclosure mechanisms
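On the provider side, machine-readable marking can be as simple as attaching provenance metadata to each generated artifact. Production systems typically rely on robust standards such as C2PA manifests or watermarking; the sketch below, with invented field names, only illustrates the kind of information a mark carries:

```python
import json
from datetime import datetime, timezone

def mark_as_synthetic(content: str, model_id: str) -> str:
    """Attach illustrative provenance metadata to generated text.

    Real Article 50 marking uses robust, machine-readable techniques
    (e.g. C2PA, watermarks); these field names are hypothetical.
    """
    return json.dumps({
        "content": content,
        "provenance": {
            "ai_generated": True,
            "generator": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    })

print(mark_as_synthetic("Hello from a model.", "example-lm-7b@2025-01"))
```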
6. Open-Source Providers: Differential Treatment
The EU AI Act provides a calibrated lighter regime for open-source GPAI models under Article 53(2). This reflects a policy choice to avoid stifling open-source innovation while maintaining baseline transparency.
Open-Source vs. Proprietary Obligations
| Obligation | Proprietary GPAI | Open-Source GPAI | Open-Source + Systemic Risk |
|---|---|---|---|
| Technical documentation (Annex XI) | Required | Exempted (Article 53(2)) | Full version required |
| Downstream provider information | Required | Exempted (public release suffices) | Required |
| Copyright compliance policy | Required | Required | Required |
| Training data summary | Required | Required | Required |
| Model evaluations (Art. 55) | If systemic risk | N/A | Required |
| Red-teaming | If systemic risk | N/A | Required |
| Incident reporting | If systemic risk | N/A | Required |
| Cybersecurity protections | If systemic risk | N/A | Required |
What Counts as "Open Source" Under the AI Act?
The AI Act defines open-source GPAI models as those whose parameters (weights and architecture) are made publicly available, allowing access, use, modification, and distribution. Importantly, merely publishing weights is not sufficient — the license must permit modification and redistribution. Models with "open weights but restricted use" licenses (like some Meta LLaMA variants) exist in a gray area. The AI Office is expected to provide further guidance.
7. Model Cards and EU Requirements
Model cards, introduced by Mitchell et al. (2019) at Google, have become the de facto industry standard for documenting ML models. The EU AI Act's technical documentation requirements (Annex XI) overlap significantly with model card conventions — but with important legal differences.
Model Card vs. Annex XI: Gap Analysis
| Model Card Field | Annex XI Equivalent | Gap |
|---|---|---|
| Model details (type, version) | General description, version | Annex XI requires training compute (FLOPs) |
| Intended use | Intended tasks | Annex XI requires foreseeable misuse analysis |
| Training data | Training data summary | AI Office template is more prescriptive |
| Evaluation results | Evaluation results | Annex XI requires adversarial testing results |
| Ethical considerations | Safety measures | Annex XI requires specific risk mitigations |
| Limitations | Known limitations | Largely aligned |
| N/A | Copyright compliance | Not in standard model cards |
| N/A | Energy consumption | Not in standard model cards |
The key takeaway: existing model cards are necessary but not sufficient for EU compliance. Providers should augment their model cards with Annex XI-specific sections covering compute resources, copyright compliance, and energy consumption, or maintain separate Annex XI documentation that cross-references the model card.
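In practice, augmentation can mean appending an EU-specific supplement to the existing card. The structure below is hypothetical: every key and value is invented for illustration, and Annex XI itself governs what the supplement must contain.

```python
# Hypothetical Annex XI supplement appended to an existing model card.
# All keys and values are illustrative; consult Annex XI for the
# authoritative list of required items.
annex_xi_supplement = {
    "training_compute": {
        "total_flops": 2.1e24,        # cumulative training compute
        "hardware": "illustrative GPU cluster description",
        "training_time_days": 30,
    },
    "energy_consumption": {
        "training_mwh": 450,          # estimate; document the methodology
        "estimation_method": "measured facility draw, PUE-adjusted",
    },
    "copyright_compliance": {
        "policy_url": "https://example.com/tdm-policy",
        "optouts_honored": ["robots.txt", "TDM reservation metadata"],
    },
    "adversarial_testing": {
        "red_team_rounds": 3,
        "summary_url": "https://example.com/safety-eval",
    },
}
```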
8. Practical Compliance Checklist
A prioritized checklist for foundation model providers, ordered by regulatory urgency:
Determine Provider Status
Confirm whether your model qualifies as a GPAI model. Assess whether you are placing it on the EU market, regardless of where you are headquartered. Determine whether the systemic-risk presumption applies (cumulative training compute above 10^25 FLOPs, or designation by the AI Office); the sketch below gives a quick first-pass estimate.
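A common rule of thumb, not prescribed by the Act, estimates training compute as roughly 6 FLOPs per parameter per training token. That gives a quick first-pass check against the 10^25 FLOP presumption threshold:

```python
def estimated_training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb estimate (~6 FLOPs per parameter per token).

    A rough heuristic only, not the regulation's methodology for
    measuring cumulative training compute.
    """
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption threshold in the Act

# Illustrative 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e}")                   # 6.30e+24
print(flops > SYSTEMIC_RISK_THRESHOLD)  # False: below the presumption
```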
Copyright Compliance
Establish a policy for complying with Directive 2019/790, including the Article 4 text-and-data-mining opt-out. Implement automated detection of rights reservations, as sketched below. Document the compliance process and maintain audit logs.
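One widely used (and partial) reservation signal is a robots.txt rule targeting known AI crawler user agents. A minimal check using only the standard library; the user-agent list is illustrative, and robots.txt is just one of several mechanisms through which rightsholders can reserve text-and-data-mining rights:

```python
from urllib import robotparser

# Illustrative AI-crawler user-agent tokens; not an exhaustive list, and
# rightsholders may also reserve rights via TDM metadata or terms of use.
AI_USER_AGENTS = ["GPTBot", "CCBot", "Google-Extended"]

def tdm_optout_signals(site: str, path: str = "/") -> dict[str, bool]:
    """Per user agent, True when robots.txt disallows crawling the path.

    A True value is treated here as one opt-out signal for that agent.
    """
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()
    url = f"{site.rstrip('/')}{path}"
    return {ua: not rp.can_fetch(ua, url) for ua in AI_USER_AGENTS}

print(tdm_optout_signals("https://example.com"))
```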
Training Data Summary
Map all training data sources (a recording pattern is sketched below). Populate the AI Office template. Publish the summary publicly. Establish an update process for new training runs.
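Mapping sources is far easier when each one is recorded in a uniform structure from the start. A hypothetical per-source record (the AI Office template, not these field names, defines what the published summary must contain):

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    """Illustrative per-source record feeding the public summary."""
    name: str
    kind: str            # e.g. "web crawl", "licensed corpus", "synthetic"
    size_tokens: int
    license_status: str  # e.g. "licensed", "public domain", "TDM exception"
    optout_filtered: bool

sources = [
    DataSource("common-web-crawl-2024", "web crawl", 9_000_000_000_000,
               "TDM exception", optout_filtered=True),
    DataSource("licensed-news-corpus", "licensed corpus", 120_000_000_000,
               "licensed", optout_filtered=False),
]

total = sum(s.size_tokens for s in sources)
print(f"{len(sources)} sources, ~{total / 1e12:.1f}T tokens mapped")
```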
Technical Documentation
Prepare Annex XI documentation. Include compute resources, evaluation results, safety measures. Ensure version control and update procedures.
Downstream Provider Packages
Create standardized documentation packages. Establish a distribution mechanism (portal, API, contractual). Set up a notification system for documentation updates; a minimal change-detection sketch follows.
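One simple way to implement update notification is to hash each released package and compare digests on release. A minimal sketch with hypothetical names:

```python
import hashlib

def package_digest(package_bytes: bytes) -> str:
    """Content hash used to detect documentation changes."""
    return hashlib.sha256(package_bytes).hexdigest()

def needs_notification(new_package: bytes, last_digest: str | None) -> bool:
    """True when downstream providers should be told about an update."""
    return package_digest(new_package) != last_digest

previous = package_digest(b"v1 of the downstream documentation package")
print(needs_notification(b"v2 with an updated limitations section", previous))  # True
```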
Contractual Framework
Update terms of service to reflect joint obligations. Define incident reporting workflows with downstream providers. Establish data sharing agreements for post-market monitoring.
Systemic Risk Compliance (if applicable)
Conduct model evaluations and adversarial testing. Perform red-teaming across risk categories. Establish incident reporting to the AI Office. Verify cybersecurity protections.
