GPAI Regulation · Pillar 6

GPAI Compliance Under the EU AI Act: Complete Guide to General Purpose AI Obligations

General Purpose AI models, from GPT-4 to open-source LLMs, face a dedicated regulatory chapter under the EU AI Act. With GPAI obligations in force since August 2025, this guide covers everything providers and downstream deployers need to know: what qualifies as GPAI, the 10^25 FLOPs systemic risk threshold, documentation and copyright obligations, open-source exemptions, the Code of Practice, and how graph-based reasoning helps navigate this complex regulatory landscape.

Updated March 23, 2026

1. What Counts as GPAI Under the EU AI Act?

Chapter V of the EU AI Act (Articles 51-56) introduces a dedicated regulatory framework for General Purpose AI (GPAI) models. This is distinct from the risk-based classification for AI systems in Chapters II-IV.

Article 3(63) defines a GPAI model as an AI model that:

  • Displays significant generality — The model can perform a wide range of distinct tasks, not just a single specialized function
  • Is capable of competently performing tasks regardless of how it is placed on the market — meaning the general capability is inherent, not dependent on deployment configuration
  • Can be integrated into a variety of downstream systems or applications — It serves as a foundation upon which others build specific AI applications
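The three criteria above can be sketched as a simple screening function. This is an illustrative checklist, not a legal test: `ModelProfile`, `is_gpai`, and the numeric `generality_floor` are all hypothetical names and cutoffs, since Article 3(63) sets no numeric threshold for "significant generality".

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    distinct_task_count: int      # breadth of tasks performed competently
    capability_is_inherent: bool  # generality not tied to one deployment
    integrable_downstream: bool   # usable as a foundation for other systems

def is_gpai(profile: ModelProfile, generality_floor: int = 10) -> bool:
    """Rough screen for the Article 3(63) definition.

    `generality_floor` is an illustrative cutoff for 'significant
    generality'; the Act itself sets no numeric task count.
    """
    return (
        profile.distinct_task_count >= generality_floor
        and profile.capability_is_inherent
        and profile.integrable_downstream
    )

print(is_gpai(ModelProfile(200, True, True)))  # True: broad LLM-style model
print(is_gpai(ModelProfile(1, True, True)))    # False: single-purpose model
```

All three criteria must hold, which matches the conjunctive wording of the definition.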

Which Models Qualify?

Based on the definition and recitals, the following clearly qualify as GPAI models:

  • Large Language Models: GPT-4, GPT-4o, Claude 3.5/4, Gemini 1.5/2.0, Llama 3, Mistral Large, Command R+
  • Multimodal Models: GPT-4V, Gemini Pro Vision, Claude with vision capabilities
  • Image Generation Models: DALL-E 3, Stable Diffusion XL, Midjourney v6 (when exhibiting general-purpose capability)
  • Code Generation Models: Codex, StarCoder, CodeLlama (if general enough to perform non-code tasks)

GPAI Model vs GPAI System

The EU AI Act distinguishes between a GPAI model (the trained model itself) and a GPAI system (when the model is deployed as or within an AI system). Article 3(66) defines a GPAI system as an AI system based on a GPAI model that has the capability to serve a variety of purposes.

This distinction matters because: GPAI models are regulated under Chapter V (provider obligations), while GPAI systems may additionally fall under the risk-based classification of Chapters II-IV if deployed in high-risk contexts. A GPAI model integrated into a credit scoring system becomes subject to both GPAI model obligations and high-risk AI system requirements.

2. The 10^25 FLOPs Systemic Risk Threshold

The EU AI Act creates a two-tier system for GPAI models. Standard GPAI models face baseline obligations. GPAI models with systemic risk face additional, stricter requirements.

How Systemic Risk Is Determined

Under Article 51, a GPAI model is presumed to have systemic risk if:

  1. Compute threshold: The cumulative amount of compute used for training exceeds 10^25 floating point operations (FLOPs)
  2. Commission designation: The European Commission may designate a model as posing systemic risk based on criteria including: number of registered end users, degree of market integration, degree of autonomy, access to data or tools, and the potential for large-scale impact on the internal market
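The two presumption routes can be expressed as a one-line check. A minimal sketch, assuming training compute is known; `has_systemic_risk` and its parameters are illustrative names, and `commission_designated` stands in for the Commission's qualitative designation route.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51 compute threshold

def has_systemic_risk(training_flops: float,
                      commission_designated: bool = False) -> bool:
    """Either route triggers the systemic-risk presumption."""
    return training_flops > SYSTEMIC_RISK_FLOPS or commission_designated

print(has_systemic_risk(10 ** 25.2))  # Llama 3 405B estimate -> True
print(has_systemic_risk(10 ** 22.7))  # Mistral 7B estimate -> False
```

Note that a model below the compute threshold can still be caught via designation, so the compute figure alone is never a safe harbor.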

Where the 10^25 FLOPs Threshold Sits

| Model | Estimated Training Compute | Above Threshold? |
|---|---|---|
| Llama 2 70B | ~10^24.1 FLOPs | No |
| Llama 3 405B | ~10^25.2 FLOPs | Yes |
| GPT-4 | ~10^25.3 FLOPs (estimated) | Yes |
| Gemini Ultra | ~10^25.4 FLOPs (estimated) | Yes |
| Mistral 7B | ~10^22.7 FLOPs | No |

Compute estimates are based on Epoch AI research (2024) and public disclosures. Exact figures for proprietary models are not published.

The threshold is designed to be future-adaptive. Article 51(2) empowers the European Commission to amend the threshold through delegated acts, taking into account evolving technological benchmarks and international developments. As training efficiency improves (e.g., through data curation, architectural advances), the Commission may lower the threshold to capture models that achieve equivalent capability with less compute.

Epoch AI Research Context

According to Epoch AI's 2024 analysis, training compute has been doubling approximately every 6 months for frontier models. At this rate, models trained in 2027 could exceed 10^26 FLOPs. The Stanford HAI AI Index 2024 reports that training costs for frontier models now exceed $100 million per training run. The systemic risk threshold thus captures a relatively small number of models (<20 globally) but ones with outsized societal impact.
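A back-of-envelope extrapolation shows why the Commission's power to adjust the threshold matters: three years of 6-month doublings multiplies compute by 2^6 = 64, roughly 1.8 orders of magnitude. The sketch below assumes a starting point of ~10^25.3 FLOPs (the GPT-4 estimate above); `projected_flops` is an illustrative helper, not an Epoch AI formula.

```python
import math

def projected_flops(start_flops: float, years: float,
                    doubling_months: float = 6.0) -> float:
    """Extrapolate training compute under a fixed doubling time."""
    doublings = years * 12.0 / doubling_months
    return start_flops * 2.0 ** doublings

# From ~10^25.3 FLOPs, three years of 6-month doublings:
proj = projected_flops(10 ** 25.3, years=3)
print(f"~10^{math.log10(proj):.1f} FLOPs")  # ~10^27.1
```

Under these assumptions, frontier compute clears 10^26 well before 2027, consistent with the Epoch AI projection quoted above.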

3. Obligations for All GPAI Providers

Article 53 establishes baseline obligations that apply to all GPAI model providers, regardless of whether the model poses systemic risk:

1. Technical Documentation

Providers must draw up and maintain up-to-date technical documentation of the model, including its training and testing processes and results. This documentation must be made available to the AI Office and national competent authorities upon request.

Annex XI specifies the minimum information required: model architecture, training methodology, data sources, compute resources, evaluation results, and known limitations.

2. Information to Downstream Providers

GPAI providers must provide adequate information and documentation to downstream providers who integrate the GPAI model into their own AI systems. This enables downstream providers to understand the model's capabilities and limitations and comply with their own obligations.

This includes: intended and prohibited uses, performance characteristics, known biases, instructions for integration, and information about training data.

3. Copyright Compliance Policy

Providers must establish a policy to comply with EU copyright law, in particular Directive (EU) 2019/790 on copyright in the Digital Single Market. This includes:

  • Identifying and respecting opt-out rights expressed by rightsholders under Article 4(3) of the DSM Directive
  • Implementing appropriate technical measures (e.g., machine-readable opt-out signals like robots.txt, ai.txt)
  • Maintaining records of copyright compliance measures
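One concrete way to honor machine-readable opt-outs is to check a site's robots.txt against known AI crawler tokens before ingesting its content. A minimal sketch using Python's standard `urllib.robotparser`; the crawler tokens (GPTBot, CCBot, Google-Extended) are real, but whether robots.txt alone satisfies Article 4(3) of the DSM Directive is unsettled, and `training_allowed` is an illustrative helper.

```python
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended"]

def training_allowed(robots_txt: str, url: str) -> dict:
    """Per-crawler fetch permission for `url` under a robots.txt body."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {agent: parser.can_fetch(agent, url) for agent in AI_CRAWLERS}

example = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(training_allowed(example, "https://example.com/article"))
# GPTBot is blocked; the other crawlers fall through to the wildcard rule
```

A compliance policy would layer record-keeping on top of this check, logging each decision so the measures can be evidenced on request.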

4. Training Data Summary

Providers must publish a sufficiently detailed summary of the content used for training the GPAI model. The AI Office provides a template for this summary.

This is one of the most debated requirements. The summary must enable parties with legitimate interests (including copyright holders) to understand what content was used, without requiring disclosure of trade secrets or proprietary datasets. The balance between transparency and IP protection remains contentious.

5. Cooperation with Authorities

GPAI providers must cooperate with the AI Office and national competent authorities as necessary. This includes providing documentation on request, participating in investigations, and responding to information requests within specified timeframes.

4. Additional Systemic Risk Obligations

GPAI models classified as posing systemic risk (above 10^25 FLOPs or designated by the Commission) must comply with all baseline obligations plus additional requirements under Article 55:

1. Model Evaluation

Perform standardized model evaluations, including adversarial testing, to identify and mitigate systemic risks. Evaluations must cover the model's capabilities and limitations, including with regard to potential for misuse.

2. Adversarial Testing (Red-Teaming)

Conduct adversarial testing of the model, including red-teaming, to identify and address vulnerabilities. This must cover both anticipated and unanticipated failure modes, misuse potential, and the model's behavior under adversarial inputs.

3. Systemic Risk Assessment and Mitigation

Assess and mitigate possible systemic risks at EU level. This includes risks to public health, safety, public security, fundamental rights, democracy, and the rule of law. Risk assessment must consider the model's potential for dual-use and cascading effects.

4. Incident Tracking and Reporting

Track, document, and report serious incidents to the AI Office and relevant national authorities without undue delay. 'Serious incident' includes events that directly or indirectly lead to death, serious damage to health, property, or the environment, or a serious and irreversible disruption of critical infrastructure.

5. Adequate Cybersecurity

Ensure an adequate level of cybersecurity protection for the model and its physical infrastructure. This includes protection against unauthorized access, data exfiltration, model theft, and adversarial manipulation of the model's weights or outputs.

Practical Scale of Compliance

A Stanford CRFM study (2024) estimated that full compliance with systemic risk obligations costs €5M–€15M annually per GPAI model, driven primarily by adversarial testing infrastructure, dedicated safety teams, and ongoing evaluation frameworks. For context, OpenAI reported spending approximately 6 months and a dedicated 40-person team on GPT-4's pre-release safety evaluations — and the EU AI Act codifies this as a minimum standard.

5. Open-Source GPAI: Exemptions and Limitations

The EU AI Act provides specific provisions for open-source GPAI models, reflecting the European Parliament's recognition that open-source AI fosters innovation and should not bear disproportionate compliance costs.

What Qualifies as Open-Source Under the AI Act?

Article 53(2) defines the criteria: a GPAI model qualifies for the open-source exemption if it makes publicly available its parameters (including weights), model architecture, and information on training under a free and open-source license that allows access, usage, modification, and distribution.

Obligations That Still Apply

  • Copyright compliance policy (Art. 53(1)(c))
  • Training data summary publication (Art. 53(1)(d))

Obligations Exempted

  • Full technical documentation (Art. 53(1)(a))
  • Downstream provider information (Art. 53(1)(b))
  • Cooperation obligations beyond the retained requirements

Critical Exception: No Open-Source Pass for Systemic Risk

The open-source exemption does NOT apply to GPAI models with systemic risk. If an open-source model exceeds the 10^25 FLOPs threshold (e.g., Llama 3 405B), it must comply with all GPAI obligations, including the additional systemic risk requirements. This means Meta's Llama 3 405B, and any future open-source models of similar scale, face the same compliance burden as proprietary models.
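The exemption logic across Articles 53 and 55 can be captured in a small decision function. The obligation names are illustrative shorthand for the provisions cited above, not official labels.

```python
BASELINE = {
    "technical_documentation",  # Art. 53(1)(a)
    "downstream_information",   # Art. 53(1)(b)
    "copyright_policy",         # Art. 53(1)(c)
    "training_data_summary",    # Art. 53(1)(d)
}
SYSTEMIC = {
    "model_evaluation", "adversarial_testing",
    "risk_assessment", "incident_reporting", "cybersecurity",  # Art. 55
}
OPEN_SOURCE_RETAINED = {"copyright_policy", "training_data_summary"}

def applicable_obligations(open_source: bool, systemic_risk: bool) -> set:
    """Decision logic incl. the 'no open-source pass' rule (Art. 53(2))."""
    if systemic_risk:
        return BASELINE | SYSTEMIC  # exemption does not apply
    if open_source:
        return set(OPEN_SOURCE_RETAINED)
    return set(BASELINE)

print(applicable_obligations(open_source=True, systemic_risk=True))
```

The key branch order matters: systemic risk is checked first, so an open-source license never reduces the obligation set for a model above the threshold.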

The Mozilla Foundation's 2024 analysis estimates that 85% of open-source AI models currently in active use fall below the systemic risk threshold and thus benefit from reduced obligations. However, as model scale increases, more open-source projects will cross the threshold.

6. The GPAI Code of Practice

Article 56 mandates the development of a Code of Practice for GPAI providers. This code is designed to translate the high-level legal obligations into practical, implementable standards.

Timeline and Development Process

| Date | Milestone |
|---|---|
| Sep 2024 | AI Office began Code of Practice drafting process |
| Nov 2024 | First draft published for stakeholder consultation |
| Feb 2025 | Revised draft incorporating feedback from 1,000+ submissions |
| May 2025 | Final Code of Practice published |
| Aug 2025 | GPAI obligations become applicable; Code serves as compliance benchmark |

What the Code of Practice Covers

The Code of Practice provides detailed guidance on:

  • Technical documentation standards: Specific templates and minimum content requirements for model cards, training methodology descriptions, and evaluation reports
  • Copyright compliance mechanisms: Practical approaches to identifying and respecting opt-out signals, maintaining compliance records, and engaging with rightsholders
  • Training data summary format: Standardized template for the public summary, balancing transparency with trade secret protection
  • Systemic risk evaluation methodology: Recommended frameworks for model evaluation, adversarial testing, and risk assessment for systemic risk models
  • Incident reporting procedures: Templates and processes for serious incident documentation and reporting

Compliance Presumption

Adherence to the Code of Practice creates a presumption of compliance with GPAI obligations. While not legally binding in the same way as the regulation itself, following the Code provides a strong defense in enforcement proceedings. GPAI providers who deviate from the Code must demonstrate by alternative means that they meet the legal requirements.

7. How Downstream Deployers Are Affected

Most enterprises interact with GPAI not as model providers but as downstream deployers — organizations that integrate GPAI models into their own AI systems or use GPAI-powered tools. The GPAI provisions have significant implications for these deployers:

Your GPAI Supply Chain Obligations

  • Verify provider compliance: Before integrating a GPAI model, verify that your provider complies with Article 53 obligations. Request their technical documentation, training data summary, and copyright compliance policy. If they cannot provide these, you face supply chain risk.
  • Inherit high-risk obligations: If you integrate a GPAI model into a high-risk AI system (e.g., using GPT-4 for credit scoring), your system faces both the high-risk AI system requirements (Chapter III) and you must ensure the underlying GPAI model is compliant.
  • Transparency to end users: If your application interacts directly with natural persons, you must disclose the AI interaction and label AI-generated content (Article 50). If the underlying GPAI model generates deepfakes or synthetic media, additional labeling obligations apply.
  • Incident reporting chain: If a serious incident occurs involving your deployed GPAI system, you must report it and may need to coordinate with the upstream GPAI model provider for root cause analysis.
  • Due diligence documentation: Maintain records demonstrating that you assessed your GPAI provider's compliance status. This is part of your overall AI governance obligation under the EU AI Act.
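The due-diligence bullet can be operationalized as a simple gap check against the evidence you hold from each provider. The field names here are illustrative, not prescribed by the Act.

```python
# Hypothetical evidence checklist for GPAI supply-chain due diligence.
REQUIRED_EVIDENCE = {
    "technical_documentation",
    "training_data_summary",
    "copyright_policy",
    "integration_instructions",
}

def supply_chain_gaps(evidence_on_file: set) -> set:
    """Evidence still to be requested from the GPAI provider."""
    return REQUIRED_EVIDENCE - evidence_on_file

gaps = supply_chain_gaps({"technical_documentation", "copyright_policy"})
print(sorted(gaps))  # ['integration_instructions', 'training_data_summary']
```

Running this per provider, and archiving the results, doubles as the due-diligence record the last bullet calls for.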

The Provider Information Gap

A KPMG survey (2025) found that 71% of European enterprises using GPAI models lack sufficient documentation from their providers to demonstrate compliance. Only 23% of GPAI providers have published a training data summary as required. Downstream deployers should proactively request compliance documentation and consider contractual obligations that require providers to maintain and share compliance evidence.

8. Navigating GPAI Compliance with Graph-Based Reasoning

GPAI compliance is uniquely complex because it sits at the intersection of multiple regulatory domains: the AI Act's GPAI provisions, the high-risk system requirements (when deployed in regulated contexts), copyright law (DSM Directive), data protection (GDPR), and sector-specific regulations. Traditional keyword search and vector-based RAG systems struggle with this complexity.

Why Graph-Based Reasoning Outperforms Vector Search

Regulatory compliance questions are fundamentally relational — they involve connections between articles, recitals, annexes, delegated acts, and cross-referenced legislation. Consider the question: “If I deploy an open-source GPAI model above 10^25 FLOPs in a credit scoring application, what are my obligations?”

Answering this requires traversing:

  • Article 53 (baseline GPAI obligations) → Article 53(2) (open-source exemption) → Article 53(2) exception for systemic risk
  • Article 51 (systemic risk definition) → 10^25 FLOPs threshold → Article 55 (additional obligations)
  • Annex III, Area 5 (essential services: creditworthiness) → Article 6(2) (high-risk classification)
  • Chapter III, Section 2 (all 8 technical requirements for high-risk systems)
  • GDPR Article 22 (automated individual decision-making) and its interaction with AI Act Article 14 (human oversight)
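That traversal can be sketched as a breadth-first walk over a toy provision graph. Node names and edges below are illustrative shorthand for the chain above, not TraceGov.ai's actual schema.

```python
from collections import deque

# Toy regulatory graph: each key points to the provisions it leads to.
EDGES = {
    "gpai_model": ["art53_baseline", "art51_systemic_risk"],
    "art53_baseline": ["art53_2_open_source_exemption"],
    "art53_2_open_source_exemption": ["systemic_risk_exception"],
    "art51_systemic_risk": ["art55_additional_obligations"],
    "credit_scoring_use": ["annex3_area5"],
    "annex3_area5": ["art6_2_high_risk"],
    "art6_2_high_risk": ["chapter3_requirements"],
    "chapter3_requirements": ["gdpr_art22_interaction"],
}

def traverse(*starts: str) -> set:
    """Deterministic BFS: every provision reachable from the scenario facts."""
    seen, queue = set(), deque(starts)
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(EDGES.get(node, []))
    return seen

hits = traverse("gpai_model", "credit_scoring_use")
```

Starting from the two facts of the scenario (a GPAI model, a credit-scoring deployment), one traversal surfaces both the Article 55 obligations and the Chapter III overlay; no repeated LLM inference is needed, which is the cost argument made below.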

TAMR+ Methodology for GPAI Compliance

TraceGov.ai uses the TAMR+ (Traceable AI for Multi-Regulatory compliance) methodology, which represents the EU AI Act, GDPR, DSM Directive, and sector regulations as a unified knowledge graph. Each article, recital, and cross-reference is a node with typed relationships.

On the EU-RegQA benchmark (the standard for regulatory question-answering), TAMR+ achieves 74% accuracy compared to 38.5% for vector-only RAG systems — at 50-800x lower cost per query, because the graph traversal is deterministic and does not require repeated LLM inference over large document chunks.

For GPAI compliance specifically, graph-based reasoning excels because it can trace multi-hop regulatory chains (GPAI model → systemic risk → open-source exception inapplicable → all obligations apply → plus high-risk overlay if deployed in Annex III context) in a single deterministic traversal.

For a detailed technical comparison, see our article on Graph Intelligence for EU AI Act Compliance.

9. Compliance Timeline

Key dates for GPAI compliance under the EU AI Act:

| Date | Milestone | Impact |
|---|---|---|
| 1 Aug 2024 | AI Act enters into force | 12-month countdown for GPAI obligations begins |
| May 2025 | Code of Practice finalized | GPAI providers have practical guidance for compliance |
| 2 Aug 2025 | GPAI obligations applicable | All new GPAI models must comply. AI Office can enforce. |
| 2 Aug 2026 | High-risk AI system rules apply | GPAI models deployed in high-risk contexts face dual requirements |
| 2 Aug 2027 | Legacy GPAI deadline | GPAI models placed on market before Aug 2025 must now comply |

Current Status (March 2026)

GPAI obligations have been in force for 7 months. The AI Office has begun compliance monitoring and has issued guidance letters to several major GPAI providers. No formal enforcement actions have been taken yet, but the Office has indicated it will begin formal proceedings against non-compliant providers in Q3 2026.

10. Frequently Asked Questions

What counts as a General Purpose AI (GPAI) model under the EU AI Act?
A GPAI model is an AI model that displays significant generality, can competently perform a wide range of distinct tasks, and can be integrated into a variety of downstream systems. This covers large language models (GPT-4, Claude, Gemini, Llama), multimodal models, and large foundation models. The key criterion is general capability across diverse tasks, not single-purpose specialization.
What is the 10^25 FLOPs threshold for systemic risk?
The EU AI Act presumes a GPAI model poses systemic risk if training compute exceeds 10^25 FLOPs. This captures frontier models like GPT-4, Gemini Ultra, and Llama 3 405B. The European Commission can adjust this threshold via delegated acts. Models can also be designated systemic risk based on other factors like user reach and market integration.
What are the obligations for all GPAI providers?
All GPAI providers must: (1) maintain technical documentation of training/testing, (2) provide information to downstream providers, (3) establish a copyright compliance policy, (4) publish a training data summary, and (5) cooperate with the AI Office. Systemic risk models face additional obligations: model evaluation, adversarial testing, risk assessment/mitigation, incident reporting, and cybersecurity.
Are open-source GPAI models exempt from EU AI Act obligations?
Partially. Open-source models that publish parameters, architecture, and training information under an open license are exempt from most obligations — only copyright policy and training data summary apply. However, the open-source exemption does NOT apply to systemic risk models (above 10^25 FLOPs), which must comply with all obligations regardless of license.
How are downstream deployers affected by GPAI rules?
Downstream deployers must: verify provider compliance before integration, inherit high-risk obligations if deploying GPAI in regulated contexts, ensure transparency to end users, participate in incident reporting chains, and maintain due diligence documentation. If you deploy a GPAI model in a high-risk context, you face both GPAI and high-risk AI system requirements.
When do GPAI obligations take effect?
GPAI obligations became applicable on 2 August 2025. The Code of Practice was finalized by May 2025. New GPAI models placed on market after August 2025 must comply immediately. Legacy models (on market before August 2025) have until 2 August 2027 to achieve compliance.


Harish Kumar

Founder & CEO, Quantamix Solutions B.V.

18+ years in enterprise AI across ING Bank, Rabobank (€400B+ portfolio), Philips (200-member GenAI community), Amazon Ring, Deutsche Bank, and Reserve Bank of India. FRM, PMP, GCP certified. Patent holder (EP26162901.8). Published researcher (SSRN 6359818). Building traceable, auditable AI for regulated industries.