Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Inter-regulation comparison · 837 — major GDPR changes · Both apply simultaneously

EU AI Act vs GDPR: Key Differences and Overlaps

The EU AI Act and GDPR are separate regulations with different scopes, different obligations, and different enforcement authorities. But for any high-risk AI system that processes personal data about EU residents — which is most of them — both apply. This guide explains the differences, the four major overlap zones, and what COM(2025) 837 changes about the relationship between the two.

The short answer: yes, you need both

GDPR asks: “Can you process this personal data, and how must you protect it?”

The AI Act asks: “Does your AI system meet safety, transparency, and oversight standards?”

A system can be fully GDPR-compliant but breach the AI Act, and vice versa. You need to check both. There is no exemption or waiver — they are independent obligations.

Check whether your AI system triggers high-risk obligations under the AI Act — in addition to your GDPR programme.


Side-by-side: EU AI Act vs GDPR

What it regulates
  • GDPR: Processing of personal data of natural persons
  • AI Act: AI systems placed on the EU market or used in the EU — based on risk to health, safety, or fundamental rights

Who it applies to
  • GDPR: Controllers (decide purpose + means of processing) and processors (process on behalf of controllers)
  • AI Act: Providers (develop/place on market), deployers (use in professional context), importers, distributors

Territorial scope
  • GDPR: Processing of EU residents' personal data — regardless of where the processor is based
  • AI Act: AI systems placed on the EU market OR whose output is used in the EU — affects non-EU providers too

Core obligations
  • GDPR: Lawful basis, purpose limitation, data minimisation, accuracy, storage limitation, security, DPIA for high-risk processing
  • AI Act: Risk management, data governance, technical documentation, human oversight, transparency, conformity assessment, post-market monitoring

Data subject / individual rights
  • GDPR: Access, rectification, erasure, portability, objection, automated decision rights (Art 22)
  • AI Act: Right to explanation of AI decisions (Art 86), right to complain to a market surveillance authority (Art 85), transparency disclosures (Art 50)

Penalty maximum
  • GDPR: €20M or 4% of global annual turnover (whichever is higher) for most violations
  • AI Act: €35M / 7% for prohibited AI; €15M / 3% for high-risk obligation breaches; €7.5M / 1.5% for incorrect information

Supervisory authority
  • GDPR: National Data Protection Authorities (DPAs) — EDPB for cross-border coordination
  • AI Act: National market surveillance authorities + EU AI Office (for GPAI models)

When it applies
  • GDPR: Applies now — since May 2018
  • AI Act: Prohibited practices banned since February 2025; most provisions, including high-risk obligations, apply from August 2026
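The penalty ceilings above all follow the same "fixed amount or share of global annual turnover, whichever is higher" rule. A minimal sketch of that arithmetic, using the tier values quoted in the table — function and key names are ours, and this is an illustration, not a legal calculation:

```python
# Illustrative sketch of the "fixed amount or share of global annual
# turnover, whichever is higher" penalty ceilings from the table above.
# Tier values are those quoted in the table (GDPR / AI Act Art 99);
# function and key names are ours, not from either regulation.

def gdpr_max_fine(global_turnover_eur: float) -> float:
    """GDPR ceiling for most violations: EUR 20M or 4% of turnover."""
    return max(20_000_000, 0.04 * global_turnover_eur)

# AI Act tiers: (fixed ceiling in EUR, share of global annual turnover)
AI_ACT_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # Art 5 violations
    "high_risk_obligation": (15_000_000, 0.03),   # high-risk obligation breaches
    "incorrect_information": (7_500_000, 0.015),  # incorrect info to authorities
}

def ai_act_max_fine(tier: str, global_turnover_eur: float) -> float:
    fixed, share = AI_ACT_TIERS[tier]
    return max(fixed, share * global_turnover_eur)

# For a company with EUR 1bn global turnover the percentage dominates;
# for small companies the fixed ceiling does.
turnover = 1_000_000_000
gdpr_ceiling = gdpr_max_fine(turnover)                             # 4% of 1bn
ai_act_ceiling = ai_act_max_fine("prohibited_practice", turnover)  # 7% of 1bn
```

Note that the two ceilings are independent: the same system can, in principle, attract a GDPR fine from a DPA and an AI Act fine from a market surveillance authority.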

The 4 major overlap zones

These are the areas where both regulations impose obligations on the same activity. Getting either one wrong creates exposure under both regimes.

1. Data governance — Art 10 ↔ GDPR Arts 5, 6, 9

High-risk AI systems must be trained, validated and tested on data that meets quality, representativeness and bias criteria (AI Act Art 10). At the same time, using personal data for training requires a lawful basis under GDPR Art 6, and special category data (health, biometrics, ethnicity) requires an additional basis under GDPR Art 9. The two obligations are complementary — AI Act Art 10 tells you what quality the data must meet; GDPR Arts 5, 6 and 9 tell you whether you are allowed to use it.

2. Automated decisions — Art 86 ↔ GDPR Art 22

GDPR Art 22 governs solely automated decisions with legal or significant effects — giving the data subject the right to human intervention, to object, and to contest. AI Act Art 86 gives any affected person the right to a clear explanation of how the high-risk AI system shaped the decision. Both can be triggered by the same decision. GDPR Art 22 focuses on the automation level; AI Act Art 86 focuses on the AI system's role in any decision by the deployer.

3. DPIAs — Art 26(9) ↔ GDPR Art 35

Article 26(9) of the AI Act explicitly states that deployers must use the information provided under Art 13 (provider instructions) to comply with their GDPR Art 35 Data Protection Impact Assessment obligation. This creates a direct procedural link: the DPIA required under GDPR for high-risk processing and the conformity assessment required under the AI Act both need to assess the same system — but they measure different things and are reviewed by different authorities.

4. Biometric data — Art 5 prohibitions ↔ GDPR Art 9

GDPR Art 9 prohibits processing biometric data without an additional legal basis (e.g. explicit consent). The AI Act adds a further layer: certain biometric uses are banned outright under Art 5 — such as real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions) — regardless of GDPR consent. GDPR consent to process biometric data does not make an Art 5 banned practice lawful; the AI Act prohibition is absolute. GDPR regulates the data layer, the AI Act regulates the system output layer, and both apply.

What dual compliance looks like in practice

For a credit scoring AI system

  • GDPR: lawful basis (contract necessity or consent) for personal data use; DPIA required for large-scale profiling; GDPR Art 22 rights if solely automated decisions.
  • AI Act: Annex III §5 high-risk classification; risk management system (Art 9); technical documentation (Art 11); Art 13 transparency obligations to the deployer; human oversight design (Art 14); conformity assessment (Art 43); Art 86 explanation right for affected persons from August 2026.

For an AI hiring system

  • GDPR: lawful basis for processing CV data and inferred personality traits; explicit basis required if any special category data inferred; GDPR Art 88 employment data rules apply; worker notification rights.
  • AI Act: Annex III §4 high-risk designation; Art 26(7) requires deployers to notify workers and workers' representatives before deployment; full high-risk compliance regime; Art 86 right to explanation for rejected candidates.

For a high-risk AI processing only anonymised data

  • GDPR: if data is truly anonymised (not pseudonymised), GDPR does not apply to the processing of that data — no lawful basis required, no data subject rights triggered.
  • AI Act: GDPR status is irrelevant — if the AI system is listed in Annex III, the full high-risk compliance regime applies regardless of whether personal data is processed. The Art 10 data governance requirements and all other high-risk obligations still apply.

COM(2025) 837 — how it changes the GDPR/AI Act relationship

PROPOSAL — COM(2025) 837 · Not yet enacted law

New GDPR Art 9(2)(k) — AI training lawful basis for special category data

837 inserts a new explicit lawful basis: processing special category personal data (health, biometrics, ethnicity, etc.) is permitted in the context of developing and operating an AI system or AI model — subject to conditions in new Art 9(5).

Conditions: implement technical and organisational measures to avoid collecting special category data in the first place; where such data is nonetheless found in training or testing datasets, remove it; and where removal would be disproportionate, isolate it so that it is not used in outputs or disclosed to third parties.
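The conditions read as an ordered fallback: avoid collecting, then remove, then isolate. A hypothetical sketch of that decision order — function and argument names are ours, since the proposal defines obligations, not an API:

```python
# Hypothetical sketch of the avoid -> remove -> isolate fallback in the
# proposed GDPR Art 9(5) conditions (COM(2025) 837). All names are
# illustrative; the proposal defines legal obligations, not an API.

def special_category_handling(found_in_dataset: bool,
                              removal_proportionate: bool) -> str:
    """Return which Art 9(5) condition governs a given record."""
    if not found_in_dataset:
        # Step 1: technical and organisational measures aim to avoid
        # collecting special category data at all.
        return "avoid"
    if removal_proportionate:
        # Step 2: where found in training/testing datasets, remove it.
        return "remove"
    # Step 3: where removal would be disproportionate, isolate the data
    # so it is not used in outputs or disclosed to third parties.
    return "isolate"
```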

Rewritten GDPR Art 22 — automated contract decisions clarified

837 rewrites Art 22(1) to explicitly state that automated contract decisions are lawful regardless of whether they could have been taken by non-automated means. This directly resolves the GDPR-AI Act tension on automated credit, hiring and insurance decisions. AI Act Art 14/26/86 obligations still apply fully — GDPR clarification of the lawful basis does not reduce the AI Act's human oversight and explanation requirements.

Reminder: 837 GDPR changes do not affect AI Act Article 5 prohibitions

The new GDPR Art 9(2)(k) lawful basis permits processing special category data for AI development. It does not exempt any AI system from EU AI Act Art 5 prohibitions on what that system outputs or does. A biometric categorisation system that infers race or political opinions from biometric data remains banned under Art 5(1)(g) — the GDPR training data basis is irrelevant to the AI Act output prohibition.

Check whether both regimes apply to your AI system

You almost certainly face both if your system does any of the following:

  • Processes any personal data about EU residents to produce its outputs
  • Makes or informs decisions about natural persons in employment, credit, healthcare, or benefits
  • Uses biometric, health, or other special category data in training or inference
  • Outputs a score, ranking, or recommendation that a human or automated process acts on
  • Is used in the EU regardless of where your company is based
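The five triggers above can be screened as a simple any-of check. A minimal first-pass sketch — field names are illustrative, and a True result means "assume dual exposure and investigate", not a legal finding:

```python
# First-pass screen over the five checklist triggers above: if any one
# holds, assume both GDPR and the EU AI Act are in scope. Field names
# are illustrative; this is triage, not a legal determination.
from dataclasses import astuple, dataclass

@dataclass
class SystemProfile:
    processes_eu_personal_data: bool     # personal data of EU residents in outputs
    informs_decisions_on_persons: bool   # employment, credit, healthcare, benefits
    uses_special_category_data: bool     # biometric, health, etc. (training or inference)
    outputs_actionable_score: bool       # score/ranking/recommendation acted on
    used_in_eu: bool                     # regardless of where the company is based

def likely_dual_regime(profile: SystemProfile) -> bool:
    # Any single trigger is enough per the checklist's "any of the following".
    return any(astuple(profile))

hiring_tool = SystemProfile(True, True, False, True, True)          # dual regime
offline_anonymised = SystemProfile(False, False, False, False, False)
```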

Frequently asked questions

Do I need to comply with both the EU AI Act and GDPR?
In most cases, yes. The two regulations have different scopes: GDPR covers any processing of personal data of EU residents, while the EU AI Act covers AI systems placed on or used in the EU market. If your AI system processes personal data about natural persons — which most high-risk AI systems do — both regulations apply simultaneously. GDPR governs whether you can process the data and how. The AI Act governs whether your AI system meets safety, transparency, and oversight standards. There is no 'one or the other' choice.
What is the main difference between EU AI Act and GDPR obligations?
GDPR is data-centric: it focuses on what data you can collect, how you can use it, and what rights data subjects have over their data. Its obligations attach to 'processing' of personal data. The EU AI Act is system-centric: it focuses on what your AI system does, what risks it poses, and whether you have adequate risk management, oversight, and documentation in place. GDPR compliance does not demonstrate AI Act compliance — they measure different things. A system can be GDPR-compliant (lawful data basis, correct consent, DPIA completed) but still fail AI Act high-risk obligations (no conformity assessment, no technical documentation, human oversight not implemented).
Which supervisory authority enforces the EU AI Act vs GDPR?
GDPR is enforced by national Data Protection Authorities (DPAs) in each Member State — CNIL in France, the BfDI in Germany, and so on (in the UK, the ICO enforces the separate post-Brexit UK GDPR). The EU AI Act is enforced by national market surveillance authorities (designated by each Member State) and, for general-purpose AI models, by the EU AI Office. These are different authorities. A deployer could face simultaneous investigations from a DPA (GDPR) and a market surveillance authority (AI Act) for the same AI system. The AI Act explicitly requires that deployers use the AI system's Art 13 documentation to inform their GDPR Article 35 Data Protection Impact Assessment — but the two authorities remain separate.
How do GDPR and the AI Act interact on biometric data?
The two regulations apply in parallel to biometric data. Under GDPR, biometric data is special category data under Article 9 — you need an explicit legal basis to process it (such as explicit consent or necessity for a legitimate purpose). Under the EU AI Act, biometric AI systems are specifically listed in Annex III §1 as high-risk, triggering the full compliance regime. Some biometric uses are outright banned under AI Act Article 5 regardless of GDPR consent — for example, real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions). Having GDPR consent to process biometric data does not make a prohibited AI Act practice lawful. The AI Act prohibition operates independently.
What does COM(2025) 837 change about the GDPR and AI Act overlap?
COM(2025) 837 proposes two changes most relevant to the GDPR-AI Act overlap. First, it inserts GDPR Article 9(2)(k), creating a new explicit lawful basis for processing special category personal data in the context of developing or operating an AI system — subject to conditions requiring controllers to minimise, isolate, and remove such data where possible. This resolves a significant legal gap where AI training on health, biometric, or similar data had an unclear GDPR basis. Second, it rewrites GDPR Article 22(1) and (2) to clarify that automated contract decisions are lawful even if they could theoretically have been taken by non-automated means. Neither change affects EU AI Act obligations — prohibitions under Art 5 still apply regardless of GDPR lawful basis.

Related compliance guides

Automated Decision-Making: GDPR Art 22 + AI Act

Deep-dive on how both regimes govern automated decisions and what 837 changes.

Right to Explanation of AI Decisions (Art 86)

Who has the right, what covers it, and what deployers must provide.

AI Act Data Governance (Article 10)

How AI Act data requirements interact with GDPR data quality obligations.

COM 837 — GDPR & Data Law Changes

All 13 proposed GDPR amendments under COM(2025) 837 explained in full.

Biometric AI Systems

The GDPR Art 9 + AI Act Art 5 biometric overlap — what is banned and what is regulated.

EU AI Act — Complete Guide

The full regulation explained: risk tiers, who it applies to, and timeline.

Check your AI system against both the AI Act and your GDPR programme

Regumatrix checks your AI system against every Annex III high-risk category, all Article 5 prohibitions, and returns your exact risk tier, the obligations that apply, and your fine exposure under Article 99. Cited report. ~30 seconds.

Analyse your AI system

3 free analyses — no credit card required