Regulatory AI · 12 min read

EU AI Act Penalties: What European Companies Risk for Non-Compliance

The EU AI Act establishes the highest administrative fines in EU regulatory history — up to €35 million or 7% of global annual turnover for prohibited practice violations. This article analyzes the three-tier penalty structure, enforcement mechanisms, national authority responsibilities, and SME provisions, and compares them with GDPR enforcement data to project what AI Act enforcement may look like in practice.

Updated March 23, 2026

1. Penalty Framework Overview

The EU AI Act (Regulation (EU) 2024/1689) establishes administrative fines in Article 99 and corrective measures in Article 98. The penalty framework is designed to be “effective, proportionate, and dissuasive” (Art. 99(1)) — the same legal standard used in GDPR, but with significantly higher caps.

The regulation uses a three-tier structure where the maximum fine depends on the nature of the violation. Each tier specifies both a fixed euro amount and a percentage of total worldwide annual turnover for the preceding financial year; for large enterprises, whichever of the two is higher applies. For SMEs and startups, proportionate caps apply instead.

Record-Setting Penalties

The AI Act's maximum penalties (€35M / 7%) exceed those of any other EU regulation. GDPR caps at €20M / 4%. The Digital Markets Act caps at 10% of global turnover but applies only to designated gatekeepers. The AI Act's penalties apply to any organization, making them the broadest high-cap penalties in EU law.

2. The Three-Tier Fine Structure

Article 99 establishes three tiers of administrative fines, each corresponding to different categories of violations:

Tier 1: €35 million or 7% of global annual turnover

Applies to: Violations of prohibited AI practices (Article 5)

This is the highest penalty tier, reserved for the most severe violations. The eight prohibited practices are:

  • Subliminal manipulation causing significant harm
  • Exploitation of vulnerabilities (age, disability, social/economic situation)
  • Social scoring (by public or private actors)
  • Individual criminal risk prediction based solely on profiling
  • Untargeted facial recognition database creation
  • Emotion recognition in workplaces and educational institutions
  • Biometric categorization by sensitive attributes (race, religion, sexual orientation)
  • Real-time remote biometric identification in public spaces (with narrow exceptions)
Tier 2: €15 million or 3% of global annual turnover

Applies to: Violations of most other AI Act obligations

This tier covers the bulk of the regulation, including:

  • High-risk AI system requirements (Articles 8–15): risk management, data governance, documentation, logging, transparency, human oversight, accuracy/robustness/cybersecurity
  • Provider obligations (Articles 16–22): quality management, conformity assessment, CE marking, registration
  • Deployer obligations (Article 26): use in accordance with instructions, monitoring, data quality
  • GPAI obligations (Articles 52–55): technical documentation, copyright compliance, model evaluation (systemic risk)
  • Transparency obligations (Article 50): AI interaction disclosure, deepfake labelling, AI content marking
  • Notified body and conformity assessment violations
Tier 3: €7.5 million or 1% of global annual turnover

Applies to: Supplying incorrect, incomplete, or misleading information to authorities

This is the lowest tier, targeting organizations that provide false or misleading information to national competent authorities or notified bodies during investigations, audits, or conformity assessment procedures. It covers both active misrepresentation and failure to provide requested information within required timeframes.

| Tier | Fixed Amount | % Turnover | Violation Type | Applied |
|------|--------------|------------|----------------|---------|
| 1 | €35,000,000 | 7% | Prohibited practices (Art. 5) | Whichever is higher |
| 2 | €15,000,000 | 3% | Most other obligations | Whichever is higher |
| 3 | €7,500,000 | 1% | Incorrect information to authorities | Whichever is higher |
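The "whichever is higher" rule for large enterprises can be sketched in a few lines of Python. This is an illustrative calculation only, not legal advice; the function and constant names are my own, and the tier figures come from the table above.

```python
# Tier caps from Article 99: (fixed amount in EUR, percentage of turnover).
# Integer arithmetic avoids floating-point rounding on large amounts.
TIERS = {
    1: (35_000_000, 7),   # prohibited practices (Art. 5)
    2: (15_000_000, 3),   # most other obligations
    3: (7_500_000, 1),    # incorrect information to authorities
}

def max_fine_cap(tier: int, annual_turnover: int) -> int:
    """Maximum fine for a large enterprise: the higher of the fixed
    amount and the percentage of worldwide annual turnover."""
    fixed, pct = TIERS[tier]
    return max(fixed, annual_turnover * pct // 100)

print(max_fine_cap(1, 200_000_000))    # fixed €35M exceeds 7% of turnover (€14M)
print(max_fine_cap(1, 5_000_000_000))  # 7% of turnover (€350M) exceeds fixed €35M
```

For a €200M-turnover enterprise the fixed €35M amount governs; at €5B turnover the percentage dominates, which is why the largest companies face the largest nominal exposure.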

3. How Fines Are Calculated

Article 99(7) specifies the criteria national authorities must consider when determining the specific amount of a fine within the applicable tier:

| Factor | Effect on Fine | Example |
|--------|----------------|---------|
| Nature, gravity, and duration | Core determinant | A system operating for 2 years faces a higher fine than one operating for 2 months |
| Intentional or negligent character | Aggravating if intentional | Deliberately concealing a prohibited system vs. inadvertent misclassification |
| Actions taken to mitigate harm | Mitigating | Promptly decommissioning a prohibited system upon discovery |
| Degree of responsibility (design measures) | Mitigating or aggravating | Having a governance framework in place vs. having no AI governance |
| Previous infringements | Aggravating | Repeat violations receive higher fines |
| Cooperation with authorities | Mitigating | Proactively reporting a compliance issue |
| Size and market share | Proportionality | SME vs. large multinational |
| Number of affected persons | Aggravating if high | AI system affecting millions of EU residents |
| Harm caused | Core determinant | Discriminatory hiring AI causing demonstrable employment harm |
| GDPR fines already imposed for same conduct | Coordination | Authorities must consider whether GDPR penalties were already levied |

Double Jeopardy Protection

Article 99(8) provides that where both a GDPR fine and an AI Act fine could apply for the same conduct, the total administrative fine shall not exceed the amount corresponding to the higher of the two. This prevents true double punishment, though both regulatory investigations can proceed independently.
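The coordination rule described above reduces to a simple cap. The sketch below is illustrative only; the function name is my own, and it assumes the article's reading of Article 99(8).

```python
def combined_admin_fine_cap(gdpr_fine: int, ai_act_fine: int) -> int:
    """Coordination rule as described above: where both regimes apply
    to the same conduct, the total administrative fine may not exceed
    the higher of the two individual fines."""
    return max(gdpr_fine, ai_act_fine)

# A €20M GDPR fine and a €35M AI Act fine for the same conduct
# combine to at most €35M, not €55M.
print(combined_admin_fine_cap(20_000_000, 35_000_000))
```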

4. Enforcement Mechanism

The AI Act creates a multi-layered enforcement architecture:

4.1 EU Level: The AI Office

The European AI Office, established within the European Commission in February 2024, is responsible for: (1) direct supervision and enforcement of GPAI model obligations, (2) coordinating enforcement across member states, (3) developing standards and guidance, (4) managing the AI regulatory sandboxes framework, and (5) international cooperation on AI governance. The AI Office can impose fines directly on GPAI providers under Article 101, with maximum fines of €15 million or 3% of global turnover for GPAI obligation violations.

4.2 National Level: Market Surveillance Authorities

Each member state must designate at least one market surveillance authority (MSA) and one notifying authority. MSAs have broad investigative powers under Article 74:

  • Access to information — Request access to AI systems, source code, training data, technical documentation, and logs
  • On-site inspections — Conduct unannounced inspections of premises where AI systems are developed, deployed, or stored
  • Testing — Conduct or commission testing of AI systems to verify compliance
  • Corrective measures — Require modification or withdrawal of non-compliant AI systems from the market
  • Administrative fines — Impose fines within the three-tier framework
  • Recall — Order the recall of AI systems that pose risks to health, safety, or fundamental rights

4.3 Coordination: European Artificial Intelligence Board

The European Artificial Intelligence Board (composed of one high-level representative per member state, chaired by the Commission) ensures consistent application of the regulation across the EU. It provides recommendations on enforcement priorities, coordinates joint investigations, and resolves cross-border enforcement conflicts. This mirrors the GDPR's European Data Protection Board (EDPB) model.

5. National Authority Landscape

Member states were required to designate their national competent authorities by 2 August 2025. The choice of authority varies by country — some have assigned responsibility to existing data protection authorities, while others have created new AI-specific bodies or assigned responsibility to existing market surveillance authorities.

| Country | Designated Authority (Primary) | Approach |
|---------|--------------------------------|----------|
| Netherlands | Autoriteit Persoonsgegevens (AP) + Rijksinspectie Digitale Infrastructuur (RDI) | Dual: DPA for fundamental rights, RDI for market surveillance |
| Germany | Bundesnetzagentur (BNetzA) | Existing telecoms/digital regulator expanded |
| France | CNIL + DGCCRF | DPA for fundamental rights, consumer protection for market surveillance |
| Spain | Agencia Española de Supervisión de la Inteligencia Artificial (AESIA) | New dedicated AI supervisory agency |
| Italy | Agenzia per l'Italia Digitale (AgID) + Garante Privacy | Digital agency + DPA coordination |
| Ireland | To be designated | Expected to involve the DPC and/or CCPC |

The diversity of national approaches means organizations operating across multiple EU markets must track multiple enforcement bodies. Unlike GDPR's one-stop-shop mechanism (which allows designation of a lead DPA based on main establishment), the AI Act does not provide an equivalent single point of contact for cross-border enforcement.

6. SME & Startup Provisions

The AI Act includes explicit protections for small and medium-sized enterprises. Article 99(6) states that for SMEs (including startups), the applicable fine is the lower of the fixed euro amount or the turnover percentage. This inverts the rule applied to large enterprises (where the higher amount applies).

| Scenario | Annual Turnover | Tier 1 (7%) | Tier 2 (3%) | Tier 3 (1%) |
|----------|-----------------|-------------|-------------|-------------|
| Startup (SME) | €2M | €140K | €60K | €20K |
| Small enterprise (SME) | €5M | €350K | €150K | €50K |
| Medium enterprise (SME) | €40M | €2.8M | €1.2M | €400K |
| Large enterprise | €200M | €35M* | €15M* | €7.5M* |
| Major corporation | €5B | €350M | €150M | €50M |
| Big Tech (Meta-scale) | €120B | €8.4B | €3.6B | €1.2B |

* For large enterprises, the fixed amount applies when it exceeds the turnover percentage. For SMEs, the lower of the two applies (turnover percentage in these examples).
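The SME inversion can be sketched alongside the large-enterprise rule; the figures below reproduce rows of the table above. This is an illustrative calculation only, not legal advice, and the names are my own.

```python
# Tier caps from Article 99: (fixed amount in EUR, percentage of turnover).
TIER_CAPS = {1: (35_000_000, 7), 2: (15_000_000, 3), 3: (7_500_000, 1)}

def fine_cap(tier: int, annual_turnover: int, is_sme: bool) -> int:
    """Art. 99(6): SMEs get the LOWER of the fixed amount and the
    turnover percentage; large enterprises get the higher."""
    fixed, pct = TIER_CAPS[tier]
    pct_amount = annual_turnover * pct // 100
    return min(fixed, pct_amount) if is_sme else max(fixed, pct_amount)

print(fine_cap(1, 5_000_000, is_sme=True))         # €350,000 (small enterprise row)
print(fine_cap(1, 200_000_000, is_sme=False))      # €35,000,000 (large enterprise row)
print(fine_cap(1, 120_000_000_000, is_sme=False))  # €8,400,000,000 (Big Tech row)
```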

SME Definition

The AI Act uses the standard EU SME definition (Commission Recommendation 2003/361/EC): fewer than 250 employees, annual turnover not exceeding €50 million, or balance sheet total not exceeding €43 million. Startups are included within the SME category. Micro-enterprises (fewer than 10 employees, turnover under €2 million) receive the greatest proportionate benefit.

7. Comparison with GDPR Fines

GDPR enforcement data provides the best available indicator for how AI Act enforcement may develop. The GDPR Enforcement Tracker (maintained by CMS Law, 2024 data) reveals the following patterns:

| Metric | GDPR (2018–2024) | AI Act (Projected) |
|--------|------------------|--------------------|
| Maximum penalty cap | €20M or 4% turnover | €35M or 7% turnover |
| Largest single fine | €1.2B (Meta, Ireland, May 2023) | Potentially €8B+ for Big Tech at 7% turnover |
| Average large fine (top 20) | ~€35M (GDPR Enforcement Tracker 2024) | Potentially 75% higher given increased caps |
| Total fines issued (6 years) | €4.5B+ across ~2,000 decisions | TBD — enforcement beginning |
| Time to first major fine | ~8 months (Google, €50M, CNIL, Jan 2019) | Estimated late 2025 – early 2026 (prohibited practices) |
| Most active enforcer | Spain (AEPD), Italy (Garante), France (CNIL) | Spain (AESIA — first dedicated AI agency) |
| SME fine proportion | ~35% of total fines by count, ~2% by value | Expected similar — proportionate caps apply |

Key GDPR Enforcement Patterns Relevant to AI Act

  • Enforcement accelerates over time — GDPR fines grew from €56M in 2019 to €2.1B in 2023. AI Act enforcement will likely follow a similar acceleration curve.
  • Big Tech attracts the largest fines — The top 10 GDPR fines were all against technology companies (Meta, Amazon, Google, WhatsApp, TikTok). AI Act enforcement will likely target the same companies for GPAI and prohibited practice violations.
  • Some DPAs are far more active than others — Spain's AEPD issued more GDPR fines than any other authority. Spain has also created the EU's first dedicated AI supervisory agency (AESIA), suggesting early aggressive enforcement.
  • Cross-border cases take years — The Meta €1.2B fine took 5 years of cross-border proceedings. AI Act cross-border cases may face similar delays absent a one-stop-shop mechanism.

8. Non-Financial Consequences

Administrative fines are only one dimension of non-compliance risk. The AI Act also empowers authorities to impose non-financial corrective measures that can be equally damaging:

  • Market withdrawal — Authorities can order removal of non-compliant AI systems from the EU market (Art. 98). For global AI providers, losing access to the EU market of 450 million consumers has severe revenue implications.
  • Product recall — AI systems posing risks to health, safety, or fundamental rights can be recalled from all deployers in the EU.
  • Operational restrictions — Authorities can restrict or prohibit the placing on the market or putting into service of an AI system until compliance is achieved.
  • Public enforcement decisions — Enforcement decisions are typically published, causing reputational damage. GDPR enforcement decisions regularly generate media coverage.
  • Civil liability exposure — The EU AI Liability Directive (proposed) and the revised Product Liability Directive extend liability frameworks to AI systems, creating private enforcement alongside regulatory penalties.
  • Procurement exclusion — Non-compliant AI providers may be excluded from public procurement processes across the EU, particularly in government and critical infrastructure sectors.

9. How to Mitigate Penalty Risk

Based on the fine calculation criteria in Article 99(7), the following measures directly reduce penalty exposure:

1. Establish AI Governance Before Enforcement

Having a governance framework in place demonstrates organizational commitment. Art. 99(7) considers the 'degree of responsibility' — organizations with established governance face lower fines than those with no structures.

Art. 99(7) factor: Degree of responsibility
2. Conduct a Proactive AI System Audit

Inventory all AI systems and classify against the risk tiers before any regulatory inquiry. Self-identified issues with documented remediation plans are treated more favorably than issues discovered during investigation.

Art. 99(7) factor: Actions to mitigate harm
3. Implement Incident Response Procedures

Prepare a clear protocol for handling AI Act non-compliance discoveries: immediate cessation of prohibited practices, documented assessment, prompt notification to authorities where required.

Art. 99(7) factor: Cooperation with authorities
4. Document Everything

Maintain comprehensive records of compliance decisions, risk assessments, conformity assessments, and governance actions. Documentation serves as evidence of good-faith compliance efforts.

Art. 99(7) factor: Intentional vs. negligent character
5. Engage Early with Regulatory Sandboxes

Participation in AI regulatory sandboxes demonstrates proactive engagement with regulators. Sandbox participants receive guidance that reduces the risk of inadvertent non-compliance.

Art. 99(7) factor: Actions to mitigate harm
6. Monitor and Respond to Enforcement Trends

Track published enforcement decisions and regulatory guidance. Adapt compliance programs to address areas of regulatory focus. The first enforcement priorities will signal where authorities are looking.

Art. 99(7) factor: Previous infringements (avoiding repeat violations)

10. Frequently Asked Questions

Who enforces the EU AI Act and how?
Enforcement operates at two levels. The European AI Office (within the Commission) coordinates enforcement and directly supervises GPAI models. National market surveillance authorities (MSAs), designated by each of the 27 member states, handle enforcement for all other AI systems. MSAs can request access to systems and source code, conduct on-site inspections, order corrective measures, withdraw products from the market, and impose administrative fines. The European Artificial Intelligence Board coordinates consistent application across the EU.
When are the first EU AI Act fines expected?
The first fines are possible now for prohibited practice violations (enforceable since February 2025). Based on the GDPR precedent (first major fine ~8 months after enforcement), the first AI Act fines could emerge in late 2025 or early 2026. For high-risk AI violations, fines cannot begin before August 2026. The AI Office has indicated it will prioritize prohibited practice enforcement initially.
Can SMEs and startups get lower fines?
Yes. Article 99(6) provides that for SMEs (including startups), the fine is the lower of the fixed euro amount or the turnover percentage. An SME with €5M turnover faces a maximum Tier 1 fine of €350K (7% of turnover), not €35M. Additionally, authorities must consider organization size and market share when setting fine amounts. Regulatory sandboxes provide controlled compliance environments. However, proportionate penalties do not eliminate compliance obligations.
How do AI Act fines compare to GDPR fines?
AI Act maximum penalties are 75% higher: €35M/7% vs. GDPR's €20M/4%. The GDPR Enforcement Tracker (2024) shows average large GDPR fines of approximately €35M, with the largest being €1.2B (Meta, 2023). If AI Act enforcement follows similar patterns, Big Tech could face fines exceeding €1B for prohibited practice violations. The AI Act also adds a third tier (€7.5M/1%) for providing incorrect information, which has no GDPR equivalent.


Harish Kumar

Founder & CEO, Quantamix Solutions B.V.

18+ years in enterprise AI across Amazon Ring, Philips (GenAI Champions), ING Bank, Rabobank (€400B+ loan portfolio), Deutsche Bank, Reserve Bank of India, and EY. FRM, PMP, GCP certified. Patent holder (EP26162901.8). Published researcher (SSRN 6359818). Building traceable, auditable AI for regulated industries.