1. Article 62: Serious Incident Reporting — Definition and Scope
Article 62 is one of the most operationally demanding provisions in the EU AI Act for organizations that provide or deploy high-risk AI systems. It establishes a mandatory reporting regime analogous to serious adverse event reporting for medical devices, deliberately borrowing its structure and terminology from the Medical Devices Regulation (MDR 2017/745).
Provider Reporting Obligation
Providers of high-risk AI systems placed on the EU market must report any serious incident to the market surveillance authorities of the member states where the incident occurred. Reporting must be made without undue delay and in any case within 15 working days of the provider first becoming aware of the incident.
Accelerated Timeline
Where the serious incident involves death or serious and irreversible deterioration of health, the provider must notify the relevant market surveillance authority within 2 working days. A follow-up report containing full details is then required within 10 additional working days.
Deployer Notification Obligation
Deployers who become aware of a serious incident or malfunctioning must immediately inform the provider or distributor. Deployers must also cooperate with competent authorities investigating the incident and provide all relevant documentation.
National Authority Powers
Upon receiving an incident report, national market surveillance authorities may require the provider to take corrective actions, issue warnings to users, withdraw or recall the system from the market, or initiate formal infringement proceedings.
2. The 15-Working-Day Reporting Timeline
The 15-working-day window is one of the most operationally challenging aspects of Article 62. It requires organizations to have incident response procedures in place before an incident occurs, rather than scrambling to build them after the fact.
- **Awareness (clock starts):** Provider or deployer becomes aware of a potential serious incident. Awareness starts the clock, even if the full investigation is incomplete. Immediate steps: preserve logs and system state, notify the internal incident response team, assess preliminary severity.
- **Initial notification:** For incidents involving death or irreversible health damage, notify the national MSA within 2 working days. Provide: system identification, incident description, immediate measures taken, and a contact person for follow-up. For all other serious incidents: conduct an initial severity assessment, preserve evidence, and begin the internal investigation.
- **Investigation:** Conduct root cause analysis. Identify affected users and the extent of harm. Review system logs (Article 12 logs are critical here). Begin drafting the full incident report. Implement immediate corrective measures where technically feasible.
- **Report submission:** For death/health incidents, submit the follow-up report to the MSA with full investigation findings. For all other incidents, finalize and submit the initial Article 62 report with complete information.
- **Post-submission:** All serious incident reports go to the national MSA. After submission: continue monitoring, document any ongoing corrective actions, and prepare for possible MSA follow-up requests or site inspections.
Working Days vs. Calendar Days: The 15-day window is measured in working days, not calendar days. A serious incident that becomes known on a Friday gives you until the end of business on the 15th working day — which, excluding weekends, is approximately 3 calendar weeks. However, this should not lead to complacency: MSAs can and do scrutinize the timeline from awareness to report, and delays without clear justification are viewed negatively.
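The working-day arithmetic above is easy to get wrong under pressure, so it is worth encoding once. A minimal sketch in Python that counts forward by business days (simplified: it skips weekends but does not model national public holidays, which would also extend a working-day deadline):

```python
from datetime import date, timedelta

def reporting_deadline(awareness: date, working_days: int = 15) -> date:
    """Count forward `working_days` business days from the awareness date.

    Simplified sketch: weekends are excluded, but national public holidays
    (which also pause a working-day clock) are not modelled here.
    """
    current = awareness
    remaining = working_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday-Friday count as working days
            remaining -= 1
    return current

# Awareness on Friday 7 June 2024: the 15th working day is 28 June 2024,
# roughly three calendar weeks later, matching the note above.
print(reporting_deadline(date(2024, 6, 7)))
```

The same helper covers the 2-working-day fast track by passing `working_days=2`.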
3. National Market Surveillance Authorities by EU Member State
Reports must be made to the market surveillance authority of the member state where the incident occurred — not necessarily where the provider is headquartered. For providers operating across multiple member states, this may mean reporting to multiple authorities simultaneously.
| Member State | Designated Authority | Focus Areas |
|---|---|---|
| Germany | Bundesnetzagentur (BNetzA) + sectoral authorities | Telecoms, energy, rail; sectoral MSAs for health, finance |
| France | ANSSI (cybersecurity) + CNIL (data) + DGCCRF (consumer) | Cybersecurity incidents; consumer protection |
| Netherlands | Rijksdienst voor Digitale Infrastructuur (RDI) | Digital infrastructure, consumer electronics |
| Sweden | Swedish Post and Telecom Authority (PTS) | Electronic communications, digital services |
| Poland | Office of Competition and Consumer Protection (UOKiK) | Consumer protection, market competition |
| Spain | AESIA (Agencia Española de Supervisión de IA) | Dedicated AI supervisory authority — pioneer in EU |
| Italy | AGID + sectoral co-supervisors | Digital transformation agency; sectoral overlap |
| Belgium | Centre for Cybersecurity Belgium (CCB) + sectoral | Cybersecurity, critical infrastructure |
Spain's AESIA — established in 2024 as the EU's first dedicated AI supervisory authority — provides the most mature AI incident reporting framework, including a digital reporting portal, incident classification guidance, and published response timelines. Organizations with Spanish operations should establish a direct relationship with AESIA before incidents occur.
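For multi-country operations, even a simple routing table keyed on the member state where the incident occurred helps avoid filing with the wrong authority. A minimal sketch based on the table above (the mapping is illustrative and collapses sectoral splits into a single primary recipient):

```python
# Primary incident-report recipients by member state, per the table above.
# Illustrative only: several states split competence across sectoral MSAs.
MSA_BY_STATE = {
    "DE": "Bundesnetzagentur (BNetzA) + sectoral authorities",
    "FR": "ANSSI / CNIL / DGCCRF (sector-dependent)",
    "NL": "Rijksdienst voor Digitale Infrastructuur (RDI)",
    "SE": "Swedish Post and Telecom Authority (PTS)",
    "PL": "Office of Competition and Consumer Protection (UOKiK)",
    "ES": "AESIA (Agencia Española de Supervisión de IA)",
    "IT": "AGID + sectoral co-supervisors",
    "BE": "Centre for Cybersecurity Belgium (CCB) + sectoral",
}

def reporting_targets(incident_states: list[str]) -> dict[str, str]:
    """Return the authority to notify for each state where the incident occurred."""
    unknown = [s for s in incident_states if s not in MSA_BY_STATE]
    if unknown:
        # Fail loudly rather than silently skipping a jurisdiction.
        raise ValueError(f"No MSA mapping for: {unknown}")
    return {s: MSA_BY_STATE[s] for s in incident_states}
```

An incident spanning Spain and Germany would yield two simultaneous filings, one per jurisdiction, consistent with the multi-state reporting point above.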
4. What Constitutes a “Serious Incident”: Death, Serious Injury, and Rights Violations
The EU AI Act's definition of a serious incident (Article 3(49)) is deliberately threshold-based to avoid flooding MSAs with minor malfunctions. However, applying the definition in practice requires careful judgment across four distinct categories.
Death of a Person
A direct or indirect causal link to the AI system's operation. Example: an autonomous medical dosing system administers an incorrect dose, leading to patient death. Indirect causation (an AI recommendation followed by a clinician) still qualifies if the AI output was the proximate cause.
Zero tolerance — always reportable.

Serious Health Damage
Serious or irreversible physical or psychological harm. Includes permanent disability, life-threatening injury, and significant psychological trauma. A single affected person is sufficient — this is not a volume threshold.
Reportable — 2-day fast track.

Fundamental Rights Violation
AI system decision that violates rights under the EU Charter: unfair discrimination in employment, wrongful denial of credit, biometric surveillance without legal basis, violation of privacy rights. Must be a direct consequence of AI system operation.
Reportable — 15-day standard track.

Critical Infrastructure Disruption
Serious and irreversible disruption to critical infrastructure (energy, water, transport, banking, health). The disruption must be serious and not easily reversible. Brief service interruptions do not qualify.
Reportable — assess duration and reversibility.

When in Doubt, Report: The EU AI Act does not penalize over-reporting of incidents that turn out to fall below the serious threshold. Failure to report a genuine serious incident, however, is a violation. Organizations should adopt a conservative assessment posture: if there is genuine uncertainty about whether an incident is serious, treat it as serious and initiate the reporting process while the investigation continues.
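The four thresholds above lend themselves to a simple triage function that maps a classified incident to its reporting track. The category names and day counts below are taken from this section; this is a sketch to anchor internal tooling, not legal advice:

```python
from enum import Enum

class IncidentCategory(Enum):
    DEATH = "death of a person"
    SERIOUS_HEALTH_DAMAGE = "serious or irreversible health damage"
    FUNDAMENTAL_RIGHTS = "fundamental rights violation"
    CRITICAL_INFRASTRUCTURE = "critical infrastructure disruption"

FAST_TRACK_DAYS = 2       # death or irreversible health damage
STANDARD_TRACK_DAYS = 15  # all other serious incidents

def reporting_track(category: IncidentCategory) -> int:
    """Return the working-day reporting deadline for a classified incident."""
    if category in (IncidentCategory.DEATH,
                    IncidentCategory.SERIOUS_HEALTH_DAMAGE):
        return FAST_TRACK_DAYS
    # Rights violations and infrastructure disruptions follow the standard track.
    return STANDARD_TRACK_DAYS
```

A borderline case with no confident category should still enter this flow: per the posture above, classify it into the nearest serious category and start the clock rather than waiting for the investigation to conclude.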
5. Incident Log Maintenance Requirements (Article 12: Logging)
Article 62 incident reporting depends entirely on the quality of logs maintained under Article 12. High-risk AI systems must automatically generate logs — and those logs must be capable of supporting post-incident reconstruction of events.
The AI system must automatically record events relevant to its operation. For systems with safety implications, logging must be continuous — not sampled or on-demand only.
At a minimum, logs must capture: the period of use (each operational session), the reference database used for verification, the input data that led to the output, the identity of the persons involved in verification (where applicable), and any events that constituted anomalies or unusual system behavior.
Logs must be retained for the period specified in the conformity assessment, with a minimum of 6 months for deployers and longer where legally required. For systems in regulated sectors (healthcare, finance), sector-specific retention laws may extend this to 5–10 years.
Logs must be accessible to competent authorities on request. They must be tamper-evident — cryptographic signing or equivalent integrity protection is required to ensure logs cannot be altered after an incident.
Logs must be structured to enable correlation between AI system inputs, outputs, and user actions — allowing investigators to reconstruct the sequence of events leading to a serious incident.
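One common way to satisfy the tamper-evidence requirement above is a SHA-256 hash chain: each entry commits to the previous entry's digest, so any after-the-fact edit breaks verification from that point onward. A minimal sketch (field names are illustrative, not taken from any standard):

```python
import hashlib
import json

def append_entry(chain: list[dict], event: dict) -> None:
    """Append a log entry whose hash covers the event and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; an altered entry invalidates the chain from there on."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash
                or entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True
```

Recording inputs, outputs, and user actions as `event` fields in one chain also gives investigators the input/output correlation described above for free, since entries are ordered and individually verifiable.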
TraceGov.ai's logging module implements Article 12-compliant log generation with SHA-256 cryptographic integrity protection. Every AI decision, input, and output is recorded with an immutable timestamp and evidence chain, ensuring that Article 62 reports can be submitted with the complete evidentiary record regulators require.
6. Post-Incident Analysis and Corrective Action Documentation
Filing an incident report is the beginning, not the end, of Article 62 compliance. Regulators expect structured post-incident analysis that demonstrates the provider has understood the root cause and taken proportionate corrective action.
Conduct systematic analysis of why the incident occurred. Use structured methods: 5 Whys, fault tree analysis, or fishbone diagrams. Map the root cause to specific system components, data inputs, model behavior, or operational procedures. Document whether the cause was foreseeable and whether it should have been identified during pre-deployment testing.
Quantify the full extent of the incident: number of persons affected, severity of harm, geographic scope, duration of exposure. Assess whether similar incidents may have occurred undetected. Review system logs for analogous events in the historical record.
Document all corrective actions taken: software patches, model updates, data corrections, process changes, configuration changes, user communication. Each action must be traceable to a specific root cause finding. Specify the expected timeline for full remediation.
After implementing corrective measures, conduct testing to verify that the root cause has been addressed and the incident cannot recur. Document test methodology and results. For high-risk applications, consider independent third-party verification.
Submit a corrective action report to the relevant MSA (the timeline varies by member state, typically 30–90 days after the initial report). Include: root cause findings, corrective actions taken, verification results, and any residual risk assessment. Retain all documentation for a minimum of 10 years.
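The requirement that every corrective action be traceable to a specific root cause finding can be enforced structurally rather than by convention: reject any action record that does not reference a documented finding. A sketch with illustrative record fields:

```python
from dataclasses import dataclass, field

@dataclass
class RootCauseFinding:
    finding_id: str
    description: str

@dataclass
class CorrectiveAction:
    action_id: str
    description: str
    finding_id: str    # must reference a documented root cause finding
    target_date: str   # expected remediation date (ISO 8601)

@dataclass
class CorrectiveActionReport:
    findings: dict[str, RootCauseFinding] = field(default_factory=dict)
    actions: list[CorrectiveAction] = field(default_factory=list)

    def add_action(self, action: CorrectiveAction) -> None:
        """Reject actions that are not traceable to a documented finding."""
        if action.finding_id not in self.findings:
            raise ValueError(
                f"Action {action.action_id} has no root cause finding")
        self.actions.append(action)
```

Structuring the report this way means the traceability check happens at the moment an action is recorded, not during a last-minute review before submission to the MSA.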
