
AI Incident Reporting Under the EU AI Act: Obligations, Timelines & Procedures

When a high-risk AI system causes harm, a 15-working-day clock starts running. Article 62 of the EU AI Act creates mandatory incident reporting obligations that most organizations are not yet prepared for. This guide covers the definition of a serious incident, reporting timelines, which national authority to contact in each EU member state, incident log requirements, and how to build post-incident corrective action documentation that satisfies regulators.

Updated November 25, 2025

1. Article 62: Serious Incident Reporting — Definition and Scope

Article 62 is one of the most operationally demanding provisions in the EU AI Act for organizations that have deployed high-risk AI systems. It establishes a mandatory reporting regime analogous to the medical device serious adverse event reporting framework — and deliberately borrows from the Medical Devices Regulation (MDR 2017/745) in its structure and terminology.

Article 62(1)

Provider Reporting Obligation

Providers of high-risk AI systems placed on the EU market must report any serious incident to the market surveillance authorities of the member states where the incident occurred. Reports must be made without undue delay and in any case within 15 working days of the provider first becoming aware of the incident.

Article 62(2)

Accelerated Timeline

Where the serious incident involves death or serious and irreversible deterioration of health, the provider must notify the relevant market surveillance authority within 2 working days. A follow-up report containing full details is then required within 10 additional working days.

Article 72(1)

Deployer Notification Obligation

Deployers who become aware of a serious incident or malfunctioning must immediately inform the provider or distributor. Deployers must also cooperate with competent authorities investigating the incident and provide all relevant documentation.

Article 73

National Authority Powers

Upon receiving an incident report, national market surveillance authorities may require the provider to take corrective actions, issue warnings to users, withdraw or recall the system from the market, or initiate formal infringement proceedings.

2. The 15-Working-Day Reporting Timeline

The 15-working-day window is one of the most operationally challenging aspects of Article 62. It requires organizations to have pre-established incident response procedures before an incident occurs — not scrambling to build them after the fact.

Day 0 (Awareness)

Provider or deployer becomes aware of a potential serious incident. Awareness starts the clock — even if full investigation is incomplete. Immediate steps: preserve logs and system state, notify internal incident response team, assess preliminary severity.

Days 1–2

For incidents involving death or irreversible health damage: notify the national MSA within 2 working days. Provide: system identification, incident description, immediate measures taken, contact person for follow-up. For all other serious incidents: conduct initial severity assessment, preserve evidence, begin internal investigation.

Days 3–10

Conduct root cause analysis. Identify affected users and extent of harm. Review system logs (Article 12 logs are critical here). Begin drafting full incident report. Implement immediate corrective measures if technically feasible.

Days 10–12

For death/health incidents: submit follow-up report to MSA with full investigation findings. For all other incidents: finalize and submit the initial Article 62 report with complete information.

Day 15 (Deadline)

All serious incident reports must be submitted to the national MSA by this point. After submission: continue monitoring, document any ongoing corrective actions, and prepare for possible MSA follow-up requests or site inspections.

Working Days vs. Calendar Days: The 15-day window is measured in working days, not calendar days. A serious incident that becomes known on a Friday gives you until the end of business on the 15th working day — which, excluding weekends, is approximately 3 calendar weeks. However, this should not lead to complacency: MSAs can and do scrutinize the timeline from awareness to report, and delays without clear justification are viewed negatively.
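The working-day arithmetic above is easy to get wrong under pressure, so it is worth automating. A minimal Python sketch, assuming only weekends pause the clock (real deadlines also exclude the national public holidays of the relevant member state, which vary):

```python
from datetime import date, timedelta

def reporting_deadline(awareness: date, working_days: int = 15) -> date:
    """Count forward `working_days` working days from the awareness date.

    Simplification: skips Saturdays and Sundays only. National public
    holidays also pause the clock and differ per member state.
    """
    current, remaining = awareness, working_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday (0) through Friday (4)
            remaining -= 1
    return current

# Awareness on Friday 28 Nov 2025: deadline lands three calendar weeks later.
print(reporting_deadline(date(2025, 11, 28)))  # 2025-12-19
```

The same function with `working_days=2` gives the fast-track notification deadline for death or irreversible health damage incidents.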

3. National Market Surveillance Authorities by EU Member State

Reports must be made to the market surveillance authority of the member state where the incident occurred — not necessarily where the provider is headquartered. For providers operating across multiple member states, this may mean reporting to multiple authorities simultaneously.

Member State | Designated Authority | Focus Areas
Germany | Bundesnetzagentur (BNetzA) + sectoral authorities | Telecoms, energy, rail; sectoral MSAs for health, finance
France | ANSSI (cybersecurity) + CNIL (data) + DGCCRF (consumer) | Cybersecurity incidents; consumer protection
Netherlands | Rijksdienst voor Digitale Infrastructuur (RDI) | Digital infrastructure, consumer electronics
Sweden | Swedish Post and Telecom Authority (PTS) | Electronic communications, digital services
Poland | Office of Competition and Consumer Protection (UOKiK) | Consumer protection, market competition
Spain | AESIA (Agencia Española de Supervisión de IA) | Dedicated AI supervisory authority; pioneer in the EU
Italy | AGID + sectoral co-supervisors | Digital transformation agency; sectoral overlap
Belgium | Centre for Cybersecurity Belgium (CCB) + sectoral | Cybersecurity, critical infrastructure

Spain's AESIA — established in 2024 as the EU's first dedicated AI supervisory authority — provides the most mature AI incident reporting framework, including a digital reporting portal, incident classification guidance, and published response timelines. Organizations with Spanish operations should establish a direct relationship with AESIA before incidents occur.

4. What Constitutes a “Serious Incident”: Death, Serious Injury, and Rights Violations

The EU AI Act's definition of a serious incident (Article 3(49)) is deliberately threshold-based to avoid flooding MSAs with minor malfunctions. However, applying the definition in practice requires careful judgment across four distinct categories.

Death of a Person

Direct or indirect causal link to AI system operation. Example: autonomous medical dosing system administers incorrect dose leading to patient death. Indirect causation (AI recommendation followed by clinician) still qualifies if AI was the proximate cause.

Zero tolerance — always reportable

Serious Health Damage

Serious or irreversible physical or psychological harm. Includes permanent disability, life-threatening injury, and significant psychological trauma. A single affected person is sufficient — this is not a volume threshold.

Reportable — 2-day fast track

Fundamental Rights Violation

AI system decision that violates rights under the EU Charter: unfair discrimination in employment, wrongful denial of credit, biometric surveillance without legal basis, violation of privacy rights. Must be a direct consequence of AI system operation.

Reportable — 15-day standard track

Critical Infrastructure Disruption

Serious and irreversible disruption to critical infrastructure (energy, water, transport, banking, health). The disruption must be serious and not easily reversible. Brief service interruptions do not qualify.

Reportable — assess duration and reversibility

When in Doubt, Report: The EU AI Act does not penalize over-reporting of incidents that turn out to be below the serious threshold. However, failure to report a genuine serious incident is a violation. Organizations should adopt a conservative assessment posture: if there is genuine uncertainty about whether an incident is serious, treat it as serious and initiate the reporting process while investigation continues.
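The four categories and their reporting tracks can be captured as simple triage logic. An illustrative Python sketch (the boolean flags and track labels are this article's summary of the categories, not statutory language; real-world classification requires human and legal judgment):

```python
from enum import Enum

class Track(Enum):
    FAST_2_DAY = "notify MSA within 2 working days"
    STANDARD_15_DAY = "report within 15 working days"
    ASSESS = "assess duration and reversibility first"

def triage(death: bool, serious_health_damage: bool,
           rights_violation: bool, infra_disruption: bool) -> Track:
    # Death and serious/irreversible health damage take the fast track.
    if death or serious_health_damage:
        return Track.FAST_2_DAY
    # Fundamental rights violations follow the standard 15-day track.
    if rights_violation:
        return Track.STANDARD_15_DAY
    # Infrastructure disruption: seriousness and reversibility decide.
    if infra_disruption:
        return Track.ASSESS
    # Conservative posture: when in doubt, treat as serious and report.
    return Track.STANDARD_15_DAY
```

Encoding the triage as code is less about automation and more about forcing the organization to write down, in advance, which facts route an incident to which deadline.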

5. Incident Log Maintenance Requirements (Article 12: Logging)

Article 62 incident reporting depends entirely on the quality of logs maintained under Article 12. High-risk AI systems must automatically generate logs — and those logs must be capable of supporting post-incident reconstruction of events.

Automatic Log Generation

The AI system must automatically record events relevant to its operation. For systems with safety implications, logging must be continuous — not sampled or on-demand only.

Minimum Log Contents

Period of use (each operational session), reference database used for verification, input data that led to the output, identity of persons involved in verification (where applicable), and any events that constituted anomalies or unusual system behavior.

Retention Period

Logs must be retained for the period specified in the conformity assessment, with a minimum of 6 months for deployers and longer where legally required. For systems in regulated sectors (healthcare, finance), sector-specific retention laws may extend this to 5–10 years.

Access and Integrity

Logs must be accessible to competent authorities on request. They must be tamper-evident — cryptographic signing or equivalent integrity protection is required to ensure logs cannot be altered after an incident.

Incident Correlation

Logs must be structured to enable correlation between AI system inputs, outputs, and user actions — allowing investigators to reconstruct the sequence of events leading to a serious incident.
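One common way to achieve tamper evidence is a hash chain: each log entry stores the SHA-256 digest of its predecessor, so any after-the-fact edit invalidates every subsequent entry. A minimal Python sketch (the record fields are illustrative; Article 12 specifies what must be logged, not this particular integrity scheme):

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log: list, event: dict) -> dict:
    """Append a hash-chained log record; editing any earlier record
    later breaks verification for everything after it."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,  # inputs, outputs, persons involved, anomalies
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify_chain(log: list) -> bool:
    """Recompute every digest; any post-hoc alteration returns False."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev_hash"] != prev or recomputed != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

A production system would add signing keys and write-once storage, but even this basic chain lets an investigator demonstrate that the evidentiary record predates the incident report.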

TraceGov.ai's logging module implements Article 12-compliant log generation with SHA-256 cryptographic integrity protection. Every AI decision, input, and output is recorded with an immutable timestamp and evidence chain, ensuring that Article 62 reports can be submitted with the complete evidentiary record regulators require.

6. Post-Incident Analysis and Corrective Action Documentation

Filing an incident report is the beginning, not the end, of Article 62 compliance. Regulators expect structured post-incident analysis that demonstrates the provider has understood the root cause and taken proportionate corrective action.

Root Cause Analysis

Conduct systematic analysis of why the incident occurred. Use structured methods: 5 Whys, fault tree analysis, or fishbone diagrams. Map the root cause to specific system components, data inputs, model behavior, or operational procedures. Document whether the cause was foreseeable and whether it should have been identified during pre-deployment testing.

Impact Assessment

Quantify the full extent of the incident: number of persons affected, severity of harm, geographic scope, duration of exposure. Assess whether similar incidents may have occurred undetected. Review system logs for analogous events in the historical record.

Corrective Measures

Document all corrective actions taken: software patches, model updates, data corrections, process changes, configuration changes, user communication. Each action must be traceable to a specific root cause finding. Specify the expected timeline for full remediation.

Verification of Effectiveness

After implementing corrective measures, conduct testing to verify that the root cause has been addressed and the incident cannot recur. Document test methodology and results. For high-risk applications, consider independent third-party verification.

MSA Follow-up Report

Submit a corrective action report to the relevant MSA (timeline varies by member state, typically 30–90 days after initial report). Include: root cause findings, corrective actions taken, verification results, and any residual risk assessment. Retain all documentation for minimum 10 years.
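The traceability requirement above (every corrective action maps to a specific root-cause finding) lends itself to a structured record. A hypothetical Python schema sketch; the field names are this sketch's own, not a mandated format:

```python
from dataclasses import dataclass

@dataclass
class RootCauseFinding:
    finding_id: str
    description: str
    component: str        # e.g. model, training data, ops procedure

@dataclass
class CorrectiveAction:
    action_id: str
    finding_id: str       # must trace to a documented finding
    description: str
    remediation_deadline: str
    verified: bool = False  # flipped after effectiveness testing

def untraceable(actions: list, findings: list) -> list:
    """Return IDs of actions that cite no documented root-cause finding,
    i.e. the traceability gap an MSA follow-up review would flag."""
    known = {f.finding_id for f in findings}
    return [a.action_id for a in actions if a.finding_id not in known]
```

Running a check like `untraceable()` before submitting the follow-up report catches the most common documentation defect: corrective actions that float free of any investigated cause.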

7. Frequently Asked Questions

What counts as a serious incident under the EU AI Act?
Article 3(49) defines a serious incident as an incident or malfunctioning that directly or indirectly leads to: (a) the death of a person or serious harm to a person's health; (b) a serious and irreversible disruption of the management or operation of critical infrastructure; (c) an infringement of obligations under Union law intended to protect fundamental rights; or (d) serious harm to property or the environment. Not every malfunction is a serious incident — the threshold is harm, not error.
What is the 15-working-day reporting timeline and when does it start?
The 15-working-day window opens at the moment of awareness — when the provider or deployer first becomes aware of the incident, not when it occurred. For incidents involving death or irreversible health damage, an initial notification must be made within 2 working days, with a fuller report within 10 additional working days.
Who must report AI incidents under Article 62?
The primary obligation falls on providers of high-risk AI systems. Deployers must immediately notify the provider when aware of a serious incident. The provider then bears responsibility for filing with the national MSA. Open-source providers and importers have additional obligations in certain circumstances.
What information must an AI incident report contain?
While no standard form is mandated, reports should include: incident date and description, AI system identification, consequences and persons affected, immediate corrective measures, and contact information. The report must be consistent with Article 12 logs and technical documentation maintained under Article 11.
Does Article 62 incident reporting apply to GPAI models?
Article 62 applies primarily to high-risk AI systems under Annex III. For systemic-risk GPAI models under Article 51, a separate reporting obligation to the EU AI Office applies. Deployers using GPAI in high-risk applications must comply with Article 62 for the application even if the GPAI provider has separate systemic-risk reporting obligations.


Harish Kumar

Founder & CEO, Quantamix Solutions B.V.

18+ years building AI governance frameworks across regulated industries. Former ING Bank (Economic Capital Modeling), Rabobank (IFRS9 Engine, €400B+ portfolio), Philips (200-member GenAI Champions Community), Amazon Ring, Deutsche Bank, and Reserve Bank of India. FRM, PMP, GCP certified. Patent holder (EP26162901.8). Published researcher (SSRN 6359818). Creator of TAMR+ methodology (74% vs 38.5% on EU-RegQA benchmark).