1. Why Both Regulations Apply
The GDPR (Regulation (EU) 2016/679) has been in force since 25 May 2018 and governs the processing of personal data. The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and governs AI systems. These are not competing frameworks — they regulate different aspects of the same technology stack.
Article 2(7) of the AI Act explicitly states: “This Regulation is without prejudice to Regulation (EU) 2016/679.” This means the AI Act adds requirements on top of GDPR obligations. It does not replace, modify, or override any GDPR provision. An AI system that passes conformity assessment under the AI Act can still violate GDPR if it lacks a lawful basis for personal data processing.
The practical implication: most AI systems that process personal data of EU residents must satisfy both regulatory frameworks simultaneously. The great majority of high-risk AI systems listed in Annex III process personal data, making dual compliance the norm rather than the exception.
The Two-Lens Model
Think of GDPR as governing the data flowing through your AI system, and the AI Act as governing the system processing that data. Every AI deployment that touches personal data requires examination through both lenses. Ignoring either creates a compliance gap.
2. Side-by-Side Comparison Table
| Dimension | GDPR (2016/679) | EU AI Act (2024/1689) |
|---|---|---|
| Subject matter | Protection of personal data | Rules for AI systems placed on or used in the EU market |
| In force since | 25 May 2018 | 1 Aug 2024 (phased enforcement through Aug 2027) |
| Scope trigger | Processing personal data of EU residents | Placing on EU market or using AI system whose output is used in EU |
| Extraterritorial reach | Yes (Art. 3(2)) | Yes (Art. 2(1)) |
| Risk approach | Risk-based processing obligations (DPIA for high risk) | Four-tier risk classification (prohibited, high, limited, minimal) |
| Impact assessment | Data Protection Impact Assessment (Art. 35) | Fundamental Rights Impact Assessment (Art. 27) |
| Transparency | Privacy notices, right to info (Arts. 13-14), automated decision-making (Art. 22) | Instructions for use (Art. 13), AI interaction disclosure (Art. 50), right to explanation (Art. 86) |
| Max penalty | EUR 20M or 4% global turnover | EUR 35M or 7% global turnover (prohibited practices) |
| Primary enforcer | National Data Protection Authorities (DPAs) | National Market Surveillance Authorities + AI Office |
| Key affected roles | Data controllers, processors | AI providers, deployers, importers, distributors |
| Individual rights | Access, rectification, erasure, portability, objection, automated decisions (Arts. 15-22) | Right to explanation (Art. 86), complaint to authority, right to effective remedy |
| Documentation | Records of Processing Activities (Art. 30) | Technical documentation (Art. 11), EU database registration, conformity declaration |
| Governance role | Data Protection Officer (DPO) | No mandatory role, but AI governance structure implied |
| Bias/discrimination | Implicit via fair processing principle (Art. 5(1)(a)) | Explicit: data governance for bias detection (Art. 10), testing for discriminatory outputs |
3. 14 Cross-Regulation Dependencies
The following dependencies identify specific points where the EU AI Act and GDPR interact, creating dual obligations or shared compliance requirements. Each dependency represents a concrete compliance consideration.
Lawful Basis for Training Data
AI Act Art. 10 requires high-quality training data. GDPR Art. 6 requires a lawful basis for processing personal data. If training data contains personal data, both requirements apply. The AI Act does not create a new lawful basis — you still need consent, legitimate interest, or another Art. 6 ground.
Special Category Data in Training
AI Act Art. 10(5) permits processing special-category data (Art. 9 GDPR: race, health, biometrics) for bias detection and correction in high-risk AI systems, but only subject to appropriate safeguards and GDPR conditions. This creates a narrow exception for bias monitoring.
Data Protection Impact Assessment + Fundamental Rights Impact Assessment
GDPR Art. 35 requires DPIA for high-risk processing. AI Act Art. 27 requires FRIA for high-risk AI deployers. Both are required. Art. 27(4) states the FRIA shall complement the DPIA — they share data protection analysis but the FRIA adds fundamental rights scope.
Transparency Obligations
GDPR Arts. 13-14 require informing data subjects about automated decisions. AI Act Art. 13 requires detailed instructions for use with accuracy levels. AI Act Art. 50 requires AI interaction disclosure. All apply simultaneously — each adds specific disclosure requirements.
Automated Decision-Making
GDPR Art. 22 gives individuals the right not to be subject to solely automated decisions with legal or similarly significant effects, with a right to human intervention. AI Act Art. 14 requires human oversight for high-risk AI. These overlap but are not identical — GDPR gives individual rights while AI Act imposes system design requirements.
Right to Explanation
GDPR Art. 22(3) provides the right to obtain human intervention and to express one's point of view on automated decisions. AI Act Art. 86 provides a right to explanation for high-risk AI decisions affecting rights. Art. 86 goes further by requiring an explanation of the role the AI system played in the decision-making procedure.
Data Minimization vs Data Quality
GDPR Art. 5(1)(c) requires data minimization — collect only what is necessary. AI Act Art. 10 requires training data to be relevant, representative, and complete. Tension exists: minimization pushes toward less data, while data quality requirements may push toward more. Resolution requires purpose-specific balancing documented in the DPIA/FRIA.
Data Retention
GDPR Art. 5(1)(e) requires storage limitation. AI Act Art. 12 requires logging and record-keeping throughout the AI system lifecycle. The retention periods must be reconciled: logs needed for AI Act compliance may contain personal data subject to GDPR deletion obligations.
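The reconciliation described above can be sketched as a simple triage rule. This is an illustrative model only, not a legal determination: the `LogRetentionPolicy` fields and the `reconcile_retention` helper are hypothetical names, and the anonymisation action shown is one common resolution, assuming the log record itself must survive for AI Act Art. 12 purposes.

```python
from dataclasses import dataclass

@dataclass
class LogRetentionPolicy:
    """Retention inputs for one category of AI system logs (hypothetical model)."""
    ai_act_minimum_months: int   # retention needed for Art. 12 logging duties
    gdpr_justified_months: int   # longest period the GDPR purpose justifies
    contains_personal_data: bool

def reconcile_retention(policy: LogRetentionPolicy) -> tuple[int, list[str]]:
    """Return an effective retention period plus any actions needed to
    reconcile AI Act record-keeping with GDPR storage limitation."""
    actions: list[str] = []
    if not policy.contains_personal_data:
        # Storage limitation does not bite; AI Act needs govern alone.
        return policy.ai_act_minimum_months, actions
    if policy.ai_act_minimum_months > policy.gdpr_justified_months:
        # Conflict: logs must be kept longer than the personal data inside
        # them can be justified. One common resolution: strip or pseudonymise
        # the personal data fields while retaining the technical log record.
        actions.append(
            "anonymise/pseudonymise personal data fields after "
            f"{policy.gdpr_justified_months} months"
        )
    return policy.ai_act_minimum_months, actions
```

The point of the sketch is that the two periods are reconciled per log category, with the conflict documented rather than silently resolved in favour of one regulation.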
Cross-Border Data Transfers
GDPR Chapter V governs international data transfers. AI Act does not add transfer restrictions but AI system training data, model parameters, and inference data may all involve cross-border flows requiring GDPR transfer safeguards (adequacy decisions, SCCs, BCRs).
Supervisory Authority Coordination
GDPR is enforced by Data Protection Authorities (DPAs). AI Act is enforced by Market Surveillance Authorities (MSAs) and the AI Office. Art. 74 AI Act requires cooperation between MSAs and DPAs. Investigations may involve both authorities — organizations need a single point of contact.
Data Subject Rights + AI-Specific Rights
GDPR provides rights to access, rectification, erasure, and portability of personal data. AI Act provides right to explanation and complaint mechanisms. An individual affected by a high-risk AI decision may exercise both GDPR data access rights (to see the data used) and AI Act explanation rights (to understand the AI's role).
Biometric Data
GDPR Art. 9 classifies biometric data as special category requiring explicit consent or another Art. 9(2) exception. AI Act prohibits certain biometric AI practices (Art. 5) and classifies others as high-risk (Annex III). Both layers of protection apply — the biometric AI system must satisfy GDPR special-category requirements AND AI Act risk-category requirements.
Privacy by Design + AI by Design
GDPR Art. 25 mandates data protection by design and by default. AI Act imposes design requirements for high-risk AI (Arts. 9-15). Both embed compliance at the design stage — organizations should integrate GDPR privacy-by-design and AI Act requirements into a single system design methodology.
Breach Notification + Incident Reporting
GDPR Art. 33 requires personal data breach notification to supervisory authority within 72 hours. AI Act Art. 73 requires serious incident reporting for high-risk AI providers. A single event (e.g., adversarial attack on AI system leaking personal data) may trigger both notification obligations simultaneously.
4. Key Overlap Areas
The following areas represent the strongest overlaps between the two regulations — where compliance work on one framework directly supports the other:
4.1 Impact Assessments
The GDPR DPIA and AI Act FRIA share substantial methodological overlap. Both require: (1) description of the processing/system, (2) assessment of necessity and proportionality, (3) identification of risks to individuals, (4) measures to mitigate those risks. The AI Act explicitly allows the FRIA to build on the DPIA (Art. 27(4)). The EDPB has published guidance (Guidelines 06/2024) recommending a single integrated assessment process.
4.2 Transparency and Disclosure
Both regulations mandate transparency, but at different layers. GDPR requires disclosure about data processing (what data, why, how long, who). The AI Act requires disclosure about system behavior (accuracy levels, limitations, intended purpose, human oversight measures). A unified transparency notice can satisfy both by combining data processing disclosures with AI system capability disclosures.
4.3 Documentation and Record-Keeping
GDPR's Records of Processing Activities (Art. 30) and the AI Act's technical documentation requirements (Art. 11, Annex IV) overlap significantly for AI systems processing personal data. A combined register that tracks both data processing activities and AI system properties eliminates duplication.
4.4 Extraterritorial Scope
Both regulations apply to non-EU organizations. GDPR applies when offering goods/services to EU data subjects or monitoring their behavior (Art. 3(2)). The AI Act applies when placing AI on the EU market or when AI output is used in the EU (Art. 2(1)). A non-EU company serving EU customers with AI-driven services must comply with both, and both require designating an EU representative.
Key Insight
Organizations with mature GDPR compliance programs have a substantial head start on AI Act compliance. The governance structures, impact assessment methodologies, documentation practices, and breach notification procedures developed for GDPR are directly transferable with AI-specific extensions.
5. Gaps & Differences
Despite the overlaps, significant gaps exist where one regulation covers ground the other does not:
| Area | GDPR Coverage | AI Act Coverage | Gap |
|---|---|---|---|
| AI safety (non-data) | Not covered | Covered (Arts. 9, 15) | GDPR does not address AI safety unrelated to personal data |
| Data subject erasure | Right to erasure (Art. 17) | No equivalent | AI Act does not address removing individuals from trained models |
| Conformity assessment | Not required | Required for high-risk (Art. 43) | GDPR has no pre-market approval mechanism |
| CE marking | Not applicable | Required for high-risk AI (Art. 48) | Product safety concept with no GDPR parallel |
| Copyright compliance | Not covered | Required for GPAI training data (Art. 53) | AI Act uniquely addresses copyright in training data |
| Model evaluation | Not covered | Required for systemic risk GPAI (Art. 55) | Technical model testing is AI Act-specific |
| DPO appointment | Required in certain cases (Art. 37) | No equivalent role mandated | AI Act does not mandate a specific governance officer |
| Consent withdrawal | Right to withdraw consent (Art. 7(3)) | No equivalent | AI Act does not address post-training consent issues |
6. Combined Compliance Strategy
Organizations should adopt an integrated approach rather than maintaining separate GDPR and AI Act compliance programs. The following strategy minimizes duplication while ensuring both regulations are fully satisfied:
Step 1: Extend GDPR Governance to Include AI
Expand your existing GDPR governance structure (DPO, privacy team, data protection policies) to include AI-specific responsibilities. The DPO should collaborate with the AI governance lead on cross-regulation issues. Create a joint data-and-AI governance committee.
Estimated: 2-4 weeks
Step 2: Build a Combined Register
Create a unified register that combines GDPR Records of Processing Activities (Art. 30) with AI Act technical documentation requirements (Art. 11). For each AI system, record both the data processing activities and the AI system properties (purpose, risk level, accuracy metrics, oversight measures).
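A combined register entry can be modelled as a single record carrying both sides. The field selection below is illustrative, assuming a minimal subset of Art. 30 GDPR and Art. 11 AI Act content; `CombinedRegisterEntry` and `needs_fria` are hypothetical names, not terms from either regulation.

```python
from dataclasses import dataclass, field

@dataclass
class CombinedRegisterEntry:
    """One AI system's entry in a combined GDPR / AI Act register.
    Illustrative field subset, not an exhaustive Art. 30 / Art. 11 list."""
    system_name: str
    # GDPR Art. 30 (records of processing activities) side
    processing_purposes: list[str]
    data_categories: list[str]
    lawful_basis: str              # e.g. "consent", "legitimate interest"
    retention_period: str
    # AI Act (technical documentation) side
    intended_purpose: str
    risk_level: str                # "prohibited" | "high" | "limited" | "minimal"
    accuracy_metrics: dict[str, float] = field(default_factory=dict)
    human_oversight_measures: list[str] = field(default_factory=list)

    def needs_fria(self) -> bool:
        """High-risk classification triggers the Art. 27 FRIA for deployers."""
        return self.risk_level == "high"
```

Keeping both dimensions in one record makes it straightforward to derive downstream obligations (such as whether a FRIA is due) from a single source of truth.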
Estimated: 2-3 weeks
Step 3: Unify Impact Assessments
Develop a single impact assessment template that satisfies both GDPR DPIA (Art. 35) and AI Act FRIA (Art. 27). Start with data protection risks, then extend to fundamental rights impacts (discrimination, freedom of expression, human dignity). Document both streams in one assessment.
Estimated: 3-6 weeks
Step 4: Consolidate Transparency Notices
Combine GDPR data subject information (Arts. 13-14) with AI Act transparency requirements (Arts. 13, 50) into integrated disclosure documents. For each AI system, users should receive one clear notice covering both data processing and AI system behavior.
Estimated: 2-3 weeks
Step 5: Align Breach and Incident Reporting
Create a single incident response procedure that covers both GDPR personal data breach notification (72-hour window, Art. 33) and AI Act serious incident reporting (Art. 73). Map which incidents trigger one or both notification obligations.
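The mapping of incidents to notification duties can be captured as a triage rule. This is a simplified sketch under stated assumptions: the `Incident` flags and `notification_obligations` function are hypothetical, and real triage of any event needs legal review, since both triggers have conditions beyond two booleans.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """Simplified incident flags (hypothetical triage model)."""
    involves_personal_data_breach: bool  # GDPR Art. 33 trigger
    is_serious_ai_incident: bool         # AI Act Art. 73 trigger (high-risk provider)

def notification_obligations(incident: Incident) -> list[str]:
    """Map one incident onto the notification duties it triggers.
    A single event can trigger both obligations simultaneously."""
    duties: list[str] = []
    if incident.involves_personal_data_breach:
        duties.append("GDPR Art. 33: notify supervisory authority within 72 hours")
    if incident.is_serious_ai_incident:
        duties.append("AI Act Art. 73: report serious incident to market surveillance authority")
    return duties
```

The adversarial-attack example from the text (an AI system compromise leaking personal data) sets both flags and yields both duties from one procedure.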
Estimated: 1-2 weeks
Step 6: Implement Coordinated Testing
Combine GDPR-required data protection testing (anonymization, pseudonymization, access controls) with AI Act-required testing (accuracy, robustness, bias detection, adversarial testing). Run integrated test cycles that cover both dimensions.
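One way to organise such an integrated cycle is a single check registry that mixes both families of checks and produces one combined report. The registry keys and the `run_integrated_cycle` helper are hypothetical; the lambdas stand in for real test implementations.

```python
from typing import Callable

def run_integrated_cycle(checks: dict[str, Callable[[], bool]]) -> dict[str, bool]:
    """Run GDPR-oriented and AI Act-oriented checks in one cycle and
    return a per-check pass/fail map for the combined compliance report."""
    return {name: check() for name, check in checks.items()}

# Placeholder checks; in practice each would call a real test routine.
checks: dict[str, Callable[[], bool]] = {
    # GDPR-side technical measures
    "pseudonymisation_applied": lambda: True,
    "access_controls_enforced": lambda: True,
    # AI Act-side testing duties for high-risk systems
    "accuracy_above_threshold": lambda: True,
    "bias_metrics_within_bounds": lambda: True,
}
results = run_integrated_cycle(checks)
```

The design choice is that one cycle yields one report covering both dimensions, rather than two test suites drifting apart on separate schedules.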
Estimated: Ongoing
7. Unified Impact Assessment Framework
The following framework satisfies both GDPR DPIA and AI Act FRIA requirements in a single process:
| Assessment Phase | GDPR DPIA Requirement | AI Act FRIA Requirement | Unified Output |
|---|---|---|---|
| 1. System Description | Description of processing operations (Art. 35(7)(a)) | Description of AI system, intended purpose, deployment context (Art. 27) | Combined system and data processing description |
| 2. Necessity & Proportionality | Assessment of necessity and proportionality (Art. 35(7)(b)) | Assessment of deployer's processes affected by AI system | Justification document covering both data processing and AI deployment |
| 3. Risk Identification | Risks to rights and freedoms of data subjects (Art. 35(7)(c)) | Specific risks to fundamental rights (non-discrimination, privacy, expression, dignity) (Art. 27(1)) | Combined risk register covering data protection + fundamental rights |
| 4. Mitigation Measures | Measures to address risks (Art. 35(7)(d)) | Governance measures, human oversight, complaint mechanisms | Unified mitigation plan with both technical and organizational measures |
| 5. Consultation | DPO opinion (Art. 35(2)), prior consultation with DPA if high residual risk (Art. 36) | Involvement of affected groups where feasible (Art. 27(1)(f)) | Combined stakeholder consultation record |
EDPB Guidance
The European Data Protection Board (EDPB) published Guidelines 06/2024 on the interplay between the AI Act and GDPR, specifically recommending integrated assessment processes. The EDPB confirmed that a single assessment document satisfying both Art. 35 GDPR and Art. 27 AI Act is the preferred approach, provided it clearly addresses both data protection and fundamental rights dimensions.
8. Frequently Asked Questions
Do I need to comply with both the EU AI Act and GDPR?
Yes, if your AI system processes personal data of EU residents. The AI Act is without prejudice to the GDPR (Art. 2(7) AI Act), so its requirements add to, rather than replace, your GDPR obligations.
How do AI Act impact assessments relate to GDPR DPIAs?
They complement each other: Art. 27(4) AI Act states that the FRIA shall complement the DPIA, and a single integrated assessment covering both is the recommended approach.
Which regulation has stricter transparency requirements?
Neither is simply stricter; they operate at different layers. GDPR governs disclosure about data processing, the AI Act governs disclosure about system behavior, and both apply simultaneously.
Can I use a single compliance framework for both?
Yes. An integrated program with shared governance, a combined register, unified impact assessments, and a joint incident procedure avoids duplication while satisfying both regulations.
Related Articles
The Complete EU AI Act Compliance Guide
Pillar guide covering the full regulation: risk classifications, conformity assessments, and implementation
AI Governance in Europe
Building effective AI governance frameworks for European regulatory requirements
AI Transparency Requirements
Technical and organizational transparency obligations under the EU AI Act
