
Foundation Model Providers: Complete EU AI Act Obligations Guide

The EU AI Act creates a layered responsibility framework where foundation model providers sit at the top of the AI supply chain. Every downstream deployer's compliance depends on the documentation, transparency, and safety guarantees flowing from the model provider. This guide maps every obligation — from Article 53 transparency to systemic risk requirements, open-source carve-outs, liability chains, and the practical documentation that must flow to each actor in the chain.

Updated April 21, 2026

1. Who Qualifies as a Foundation Model Provider

The EU AI Act does not use the term "foundation model" in its final text. Instead, it defines general-purpose AI (GPAI) models in Article 3(63) as models that display significant generality, are capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications. In practice, this captures what the industry calls foundation models, large language models, and multimodal base models.

Provider Definition Criteria

| Criterion | Description | Examples |
| --- | --- | --- |
| Develops the model | Trains or substantially modifies a GPAI model | OpenAI, Anthropic, Mistral, Meta (LLaMA) |
| Places on EU market | Makes available in the EU, regardless of HQ location | US company offering API access to EU customers |
| Significant generality | Model performs diverse tasks, not narrow AI | LLMs, multimodal models, code generation models |
| Fine-tuning threshold | Fine-tuning that materially changes capabilities creates a new provider | Company fine-tuning LLaMA into a specialized medical model |
| API distribution | Providing model access via API counts as placing on market | Cloud AI services with EU endpoints |
| Weight distribution | Releasing downloadable weights counts as placing on market | Open-source model releases on Hugging Face |

Critical Nuance: A company that fine-tunes an existing GPAI model may become a new GPAI provider if the fine-tuning substantially modifies the model. The threshold is not precisely defined in the regulation, but Recital 97 indicates that fine-tuning for a narrow task does not create a new GPAI model, while fine-tuning that materially changes the model's capabilities or intended purpose likely does.

2. Supply Chain Obligations: Upstream to Downstream

The EU AI Act establishes a three-tier responsibility model for the AI supply chain. Each tier has distinct obligations, but they are interconnected: upstream compliance enables downstream compliance.

Tier 1: GPAI Model Provider (Upstream)

Develops or substantially modifies the foundation model. Bears Article 53 transparency obligations and, if applicable, Article 55 systemic risk obligations.

  • Technical documentation (Annex XI)
  • Training data summary (AI Office template)
  • Copyright compliance policy
  • Downstream provider information packages
Documentation & information flow down

Tier 2: AI System Provider (Integrator)

Integrates the GPAI model into a specific AI system. Bears Articles 9-15 obligations for high-risk systems. Depends on upstream documentation.

  • Risk management system (Article 9)
  • Data governance (Article 10)
  • Technical documentation (Article 11)
  • Transparency to deployers (Article 13)
  • Human oversight design (Article 14)
Instructions for use flow down

Tier 3: Deployer (Downstream)

Uses the AI system in operational context. Bears Article 26 deployer obligations. Depends on instructions and documentation from provider.

  • Use per instructions (Article 26(1))
  • Human oversight assignment (Article 26(2))
  • Input data relevance (Article 26(4))
  • Monitoring and incident reporting (Article 26(5))

This tiered structure means foundation model providers enable or constrain the entire chain. A GPAI provider that delivers incomplete documentation forces downstream integrators into a compliance gap they cannot close independently.

3. Documentation That Must Flow to Deployers

Article 53(1)(b) creates the information-flow obligation: the documentation must be sufficient for downstream providers to understand the model's capabilities, limitations, and risks, and to comply with their own regulatory obligations.

Required Documentation Categories

| Category | Content | Enables Downstream Compliance With |
| --- | --- | --- |
| Model Identity | Version, architecture type, parameter count, training date, unique identifier | Article 11 (Technical documentation) |
| Capabilities | What the model can do, benchmarks, performance characteristics | Article 9 (Risk assessment) |
| Limitations | Known failure modes, out-of-distribution behavior, language/domain gaps | Article 9 (Risk management) |
| Bias Assessment | Known biases, demographic performance variations, mitigation measures | Article 10 (Data governance) |
| Safety Evaluations | Adversarial test results, harmful output potential, safety guardrails | Article 15 (Accuracy, robustness) |
| Integration Guidelines | How to properly integrate, recommended guardrails, prohibited uses | Article 13 (Transparency), Article 14 (Human oversight) |
| Data Governance | Training data characteristics, GDPR considerations, copyright status | Article 10 (Data governance) |
| Update Policy | Version release schedule, backward compatibility, deprecation timeline | Article 72 (Post-market monitoring) |
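In practice, this package can be delivered as versioned, machine-readable metadata alongside the model. A minimal Python sketch of such a package follows; the field names and structure are illustrative assumptions, not an official AI Office schema.

```python
from dataclasses import asdict, dataclass, field
import json

@dataclass
class DownstreamInfoPackage:
    """Illustrative Article 53(1)(b) package; field names are assumptions."""
    model_name: str
    version: str
    architecture: str                     # model identity -> Article 11 docs
    parameter_count: int
    capabilities: list[str] = field(default_factory=list)          # Article 9
    known_limitations: list[str] = field(default_factory=list)     # Article 9
    bias_assessment: dict[str, str] = field(default_factory=dict)  # Article 10
    safety_evaluations: list[str] = field(default_factory=list)    # Article 15
    integration_guidelines: str = ""      # Articles 13 and 14
    deprecation_date: str | None = None   # Article 72 monitoring

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

package = DownstreamInfoPackage(
    model_name="example-gpai",
    version="1.2.0",
    architecture="decoder-only transformer",
    parameter_count=70_000_000_000,
    capabilities=["text generation", "summarization"],
    known_limitations=["degraded performance on low-resource languages"],
)
print(package.to_json())
```

A versioned structure like this also makes documentation updates diffable, which simplifies the update-notification duty toward downstream integrators.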

Documentation as a Competitive Advantage

GPAI providers who deliver comprehensive, well-structured documentation will attract downstream integrators who need compliance certainty. In a regulated market, documentation quality becomes a differentiator. Providers who view documentation as a cost center rather than a competitive advantage will lose market share to those who treat it as a feature.

4. Liability Chain Under the AI Act

The EU AI Act creates a distributed liability model where each actor in the supply chain is responsible for their specific obligations. Understanding this chain is critical for foundation model providers because non-compliance at the provider level cascades downstream.

Enforcement and Penalties by Actor

| Actor | Primary Obligations | Maximum Penalty |
| --- | --- | --- |
| GPAI Provider | Article 53 transparency, Article 55 systemic risk | Up to 3% of global annual turnover or EUR 15M |
| AI System Provider | Articles 9-15 high-risk requirements | Up to 3% of global annual turnover or EUR 15M |
| Deployer | Article 26 deployment obligations | Up to 3% of global annual turnover or EUR 15M |
| Any actor (prohibited practices) | Article 5 prohibited AI practices | Up to 7% of global annual turnover or EUR 35M |

Cascading Liability: If a downstream deployer's AI system causes harm because the GPAI provider's documentation failed to disclose a known limitation, enforcement can trace back to the provider. The proposed AI Liability Directive would further strengthen this by establishing a rebuttable presumption of causality where providers fail to disclose required information, though the proposal's fate remains uncertain. This means inadequate documentation is not just a regulatory fine risk; it is a civil liability exposure.

For the full penalties framework, see: EU AI Act Penalties and Enforcement Guide.

5. Joint Obligations: Providers and Deployers

Several EU AI Act requirements create shared responsibilities between GPAI providers and downstream actors. These joint obligations require contractual alignment and ongoing coordination.

Post-Market Monitoring (Article 72)

Providers must establish post-market monitoring systems. Deployers must cooperate by sharing information about incidents, performance degradation, and misuse patterns. GPAI providers must maintain channels for receiving this downstream feedback.

Provider Responsibility

Establish monitoring system, maintain feedback channels

Deployer Responsibility

Report incidents, share performance data, flag misuse

Serious Incident Reporting (Article 73)

When a serious incident occurs, both the deployer (who detected it) and the provider (who must investigate root cause) have reporting obligations to the relevant market surveillance authority.

Provider Responsibility

Investigate root cause, update documentation, notify AI Office

Deployer Responsibility

Report within the Article 73 deadlines (2 to 15 days depending on severity), preserve evidence
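Because the reporting clock starts at detection, some teams track the deadlines programmatically. A minimal sketch, assuming the Article 73 windows of 15 days in the general case, 10 days for a death, and 2 days for a widespread infringement or critical-infrastructure incident (verify these values against the final text before relying on them):

```python
from datetime import datetime, timedelta

# Assumed Article 73 reporting windows in days (verify against the final text):
# 15 days generally, 10 days for a death, 2 days for a widespread infringement
# or a serious incident affecting critical infrastructure.
REPORTING_WINDOW_DAYS = {
    "general": 15,
    "death": 10,
    "widespread_or_critical_infrastructure": 2,
}

def reporting_deadline(detected_at: datetime, severity: str) -> datetime:
    """Latest time by which the incident must be reported, by severity class."""
    return detected_at + timedelta(days=REPORTING_WINDOW_DAYS[severity])

detected = datetime(2026, 4, 21, 9, 30)
print(reporting_deadline(detected, "widespread_or_critical_infrastructure"))
# -> 2026-04-23 09:30:00
```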

Fundamental Rights Impact Assessment (Article 27)

Certain deployers of high-risk AI systems (public bodies, private entities providing public services, and some credit and insurance use cases) must conduct FRIAs before deployment. GPAI providers must provide sufficient information to enable this assessment, including known demographic performance variations and bias profiles.

Provider Responsibility

Disclose demographic performance data, known biases

Deployer Responsibility

Conduct FRIA, document findings, implement mitigations

Transparency to Affected Persons (Article 50)

Persons interacting with AI systems must be informed. This requires coordination: the GPAI provider must disclose what the model generates (e.g., synthetic content), and the deployer must implement user-facing transparency mechanisms.

Provider Responsibility

Provide content marking, synthetic content indicators

Deployer Responsibility

Display AI interaction notices, implement disclosure mechanisms
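A minimal sketch of the deployer-side mechanics, assuming a hypothetical generate() call standing in for the provider's API: every response carries both a human-readable notice and a machine-readable flag.

```python
AI_NOTICE = "This response was generated by an AI system."

def generate(prompt: str) -> str:
    """Stand-in for the provider's model API (hypothetical)."""
    return f"Model answer to: {prompt}"

def generate_with_disclosure(prompt: str) -> dict:
    """Wrap every model output with an Article 50-style disclosure."""
    return {
        "content": generate(prompt),
        "ai_generated": True,   # machine-readable synthetic-content flag
        "notice": AI_NOTICE,    # human-readable interaction notice
    }

print(generate_with_disclosure("What does the EU AI Act require?"))
```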

6. Open-Source Providers: Differential Treatment

The EU AI Act provides a calibrated lighter regime for open-source GPAI models under Article 53(2). This reflects a policy choice to avoid stifling open-source innovation while maintaining baseline transparency.

Open-Source vs. Proprietary Obligations

| Obligation | Proprietary GPAI | Open-Source GPAI | Open-Source + Systemic Risk |
| --- | --- | --- | --- |
| Technical documentation (Annex XI) | Required | Exempted | Full version required |
| Downstream provider information | Required | Exempted (public release suffices) | Required |
| Copyright compliance policy | Required | Required | Required |
| Training data summary | Required | Required | Required |
| Model evaluations (Art. 55) | If systemic risk | N/A | Required |
| Red-teaming | If systemic risk | N/A | Required |
| Incident reporting | If systemic risk | N/A | Required |
| Cybersecurity protections | If systemic risk | N/A | Required |

What Counts as "Open Source" Under the AI Act?

The AI Act defines open-source GPAI models as those whose parameters (weights and architecture) are made publicly available, allowing access, use, modification, and distribution. Importantly, merely publishing weights is not sufficient — the license must permit modification and redistribution. Models with "open weights but restricted use" licenses (like some Meta LLaMA variants) exist in a gray area. The AI Office is expected to provide further guidance.
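One way compliance teams triage releases is by license identifier, escalating gray-area licenses for legal review. The sketch below is illustrative only; the license groupings are assumptions for demonstration, not legal determinations.

```python
# Illustrative license triage (assumptions, not legal determinations).
FREE_AND_OPEN = {"apache-2.0", "mit", "bsd-3-clause"}  # use, modify, redistribute
OPEN_WEIGHTS_RESTRICTED = {"llama-2-community", "llama-3-community"}  # usage limits

def article_53_2_status(license_id: str) -> str:
    lid = license_id.lower()
    if lid in FREE_AND_OPEN:
        return "likely qualifies for the Article 53(2) lighter regime"
    if lid in OPEN_WEIGHTS_RESTRICTED:
        return "gray area: open weights, restricted use; await AI Office guidance"
    return "unknown license: manual legal review required"

print(article_53_2_status("Apache-2.0"))
```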

7. Model Cards and EU Requirements

Model cards, introduced by Mitchell et al. (2019) at Google, have become the de facto industry standard for documenting ML models. The EU AI Act's technical documentation requirements (Annex XI) overlap significantly with model card conventions — but with important legal differences.

Model Card vs. Annex XI: Gap Analysis

| Model Card Field | Annex XI Equivalent | Gap |
| --- | --- | --- |
| Model details (type, version) | General description, version | Annex XI requires training compute (FLOPs) |
| Intended use | Intended tasks | Annex XI requires foreseeable misuse analysis |
| Training data | Training data summary | AI Office template is more prescriptive |
| Evaluation results | Evaluation results | Annex XI requires adversarial testing results |
| Ethical considerations | Safety measures | Annex XI requires specific risk mitigations |
| Limitations | Known limitations | Largely aligned |
| N/A | Copyright compliance | Not in standard model cards |
| N/A | Energy consumption | Not in standard model cards |

The key takeaway: existing model cards are necessary but not sufficient for EU compliance. Providers should augment their model cards with Annex XI-specific sections covering compute resources, copyright compliance, and energy consumption, or maintain separate Annex XI documentation that cross-references the model card.
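One lightweight way to audit that gap is to diff an existing model card against the EU-specific fields from the table above. The field names in this sketch are illustrative assumptions, not a mandated schema.

```python
# Annex XI-specific fields typically absent from standard model cards
# (field names are illustrative, not a mandated schema).
ANNEX_XI_EXTRAS = {
    "training_compute_flops",
    "foreseeable_misuse_analysis",
    "adversarial_testing_results",
    "copyright_compliance_policy",
    "energy_consumption",
}

def annex_xi_gaps(model_card: dict) -> set[str]:
    """Return the Annex XI fields missing from an existing model card."""
    return ANNEX_XI_EXTRAS - model_card.keys()

card = {
    "model_details": "example-gpai v1.2",
    "intended_use": "general-purpose text generation",
    "training_data": "public web corpus summary",
    "evaluation_results": {"mmlu": 0.71},
    "limitations": ["low-resource language gaps"],
}
print(annex_xi_gaps(card))  # -> all five EU-specific fields are missing
```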

8. Practical Compliance Checklist

A prioritized checklist for foundation model providers, ordered by regulatory urgency:

P0

Determine Provider Status

Confirm whether your model qualifies as GPAI. Assess if you are 'placing on the EU market.' Determine if systemic risk designation applies (10^25 FLOPs or AI Office designation).
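For a first-pass self-assessment against the 10^25 FLOPs threshold, the common approximation of roughly 6 FLOPs per parameter per training token for dense transformers gives a rough estimate. This is a heuristic, not the Act's prescribed measurement method:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough dense-transformer heuristic: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

flops = estimate_training_flops(parameters=70e9, training_tokens=15e12)
print(f"{flops:.2e} FLOPs; systemic risk presumed: "
      f"{flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")
# 6 * 70e9 * 15e12 = 6.3e24 FLOPs, below the 1e25 threshold
```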

P0

Copyright Compliance

Establish a policy for compliance with Directive (EU) 2019/790, including its Article 4(3) text-and-data-mining opt-outs. Implement automated opt-out reservation detection (a sketch follows below). Document the compliance process and maintain audit logs.
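Opt-out reservations under Article 4(3) of Directive 2019/790 are often expressed through robots.txt, the most widespread machine-readable signal (though not the only valid one). A minimal detection sketch using Python's standard library, with an assumed crawler name:

```python
from urllib.robotparser import RobotFileParser

def tdm_opt_out(site: str, url: str, crawler: str = "ExampleAICrawler") -> bool:
    """True if the site's robots.txt disallows our (hypothetical) crawler.

    robots.txt is one common opt-out signal; rights holders may also reserve
    rights through other machine-readable or contractual means, so a negative
    result here does not by itself establish that mining is permitted.
    """
    parser = RobotFileParser()
    parser.set_url(f"{site}/robots.txt")
    parser.read()  # performs a network fetch; handle errors in production
    return not parser.can_fetch(crawler, url)

if tdm_opt_out("https://example.com", "https://example.com/articles/1"):
    print("Opt-out reserved: exclude from training corpus and log the decision.")
```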

P1

Training Data Summary

Map all training data sources. Populate the AI Office template. Publish the summary publicly. Establish update process for new training runs.

P1

Technical Documentation

Prepare Annex XI documentation. Include compute resources, evaluation results, safety measures. Ensure version control and update procedures.

P2

Downstream Provider Packages

Create standardized documentation packages. Establish a distribution mechanism (portal, API, contractual). Set up notification system for documentation updates.

P2

Contractual Framework

Update terms of service to reflect joint obligations. Define incident reporting workflows with downstream providers. Establish data sharing agreements for post-market monitoring.

P3

Systemic Risk Compliance (if applicable)

Conduct model evaluations and adversarial testing. Perform red-teaming across risk categories. Establish incident reporting to the AI Office. Verify cybersecurity protections.

9. Frequently Asked Questions

Who qualifies as a foundation model provider under the EU AI Act?
Any entity that develops a GPAI model and places it on the EU market — whether via API, downloadable weights, or embedded in products. This includes US/non-EU companies serving EU customers. Fine-tuning that materially changes capabilities may also create a new provider.
What documentation must flow from GPAI providers to downstream deployers?
Technical documentation per Annex XI (capabilities, limitations, training methodology, evaluations), bias assessments, safety evaluation results, integration guidelines, and update/versioning policies. This must be sufficient for downstream providers to comply with Articles 9-15.
How are open-source foundation model providers treated?
Open-source GPAI providers receive lighter obligations: exempted from full Annex XI documentation and downstream provider information, but must still comply with copyright obligations and training data summary requirements. Open-source models classified as systemic risk must meet all Article 55 requirements.
What is the liability chain between providers and deployers?
Each tier bears its own obligations. GPAI providers: Article 53 transparency. System providers: Articles 9-15 for high-risk. Deployers: Article 26. Non-compliance cascades — if a deployer fails because the GPAI provider's documentation was inadequate, enforcement traces back. Maximum penalties: 3% of global turnover or EUR 15M.

Harish Kumar

Founder & CEO, Quantamix Solutions B.V.

18+ years building AI governance frameworks across regulated industries. Former ING Bank (Economic Capital Modeling), Rabobank (IFRS9 Engine, €400B+ portfolio), Philips (200-member GenAI Champions Community), Amazon Ring, Deutsche Bank, and Reserve Bank of India. FRM, PMP, GCP certified. Patent holder (EP26162901.8). Published researcher (SSRN 6359818).