The EU AI Act and GDPR are separate regulations with different scopes, different obligations, and different enforcement authorities. But for any high-risk AI system that processes personal data about EU residents — which is most of them — both apply. This guide explains the differences, the four major overlap zones, and what COM(2025) 837 changes about the relationship between the two.
GDPR asks: “Can you process this personal data, and how must you protect it?”
The AI Act asks: “Does your AI system meet safety, transparency, and oversight standards?”
A system can be fully GDPR-compliant but breach the AI Act, and vice versa. You need to check both. There is no exemption or waiver — they are independent obligations.
Check whether your AI system triggers high-risk obligations under the AI Act — in addition to your GDPR programme.
| Aspect | GDPR | EU AI Act |
|---|---|---|
| What it regulates | Processing of personal data of natural persons | AI systems placed on the EU market or used in the EU — based on risk to health, safety, or fundamental rights |
| Who it applies to | Controllers (decide purpose + means of processing) and processors (process on behalf of controllers) | Providers (develop/place on market), deployers (use in professional context), importers, distributors |
| Territorial scope | Processing of EU residents' personal data — regardless of where the processor is based | AI systems placed on the EU market OR whose output is used in the EU — affects non-EU providers too |
| Core obligations | Lawful basis, purpose limitation, data minimisation, accuracy, storage limitation, security, DPIA for high-risk processing | Risk management, data governance, technical documentation, human oversight, transparency, conformity assessment, post-market monitoring |
| Data subject / individual rights | Access, rectification, erasure, portability, objection, automated decision rights (Art 22) | Right to explanation of AI decisions (Art 86), right to complain to market surveillance authority (Art 85), transparency disclosures (Art 50) |
| Penalty maximum | €20M or 4% of global annual turnover (whichever higher) for most violations | €35M / 7% for prohibited AI; €15M / 3% for high-risk obligation breaches; €7.5M / 1.5% for incorrect info |
| Supervisory authority | National Data Protection Authorities (DPAs) — EDPB for cross-border coordination | National market surveillance authorities + EU AI Office (for GPAI models) |
| When it applies | Applies now — since May 2018 | Prohibited practices: February 2025. Most provisions including high-risk obligations: August 2026 |
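Each penalty ceiling in the table is the greater of a fixed amount and a percentage of global annual turnover. A minimal arithmetic sketch, using only the tiers listed above (the function name and the €2bn example turnover are illustrative, not from either regulation):

```python
def fine_ceiling(fixed_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Maximum fine: the higher of a fixed amount or a share of global annual turnover."""
    return max(fixed_eur, turnover_pct * global_turnover_eur)

# Illustrative: a company with €2bn global annual turnover
turnover = 2_000_000_000

gdpr_max = fine_ceiling(20_000_000, 0.04, turnover)            # 4% of €2bn = €80M > €20M
ai_act_prohibited = fine_ceiling(35_000_000, 0.07, turnover)   # 7% of €2bn = €140M
ai_act_high_risk = fine_ceiling(15_000_000, 0.03, turnover)    # 3% of €2bn = €60M
```

For smaller companies the fixed amount dominates: at €100M turnover, 7% is only €7M, so the prohibited-AI ceiling stays at €35M.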
These are the areas where both regulations impose obligations on the same activity. Getting either one wrong creates exposure under both regimes.
High-risk AI systems must be trained, validated and tested on data that meets quality, representativeness and bias criteria (AI Act Art 10). At the same time, using personal data for training requires a lawful basis under GDPR Art 6, and special category data (health, biometrics, ethnicity) requires an additional basis under GDPR Art 9. The two obligations are complementary — AI Act Art 10 tells you what quality the data must meet; GDPR Arts 5-9 tell you whether you are allowed to use it.
GDPR Art 22 governs solely automated decisions with legal or similarly significant effects — giving the data subject the right to human intervention, to express their point of view, and to contest the decision. AI Act Art 86 gives any affected person the right to a clear explanation of how the high-risk AI system shaped the decision. Both can be triggered by the same decision. GDPR Art 22 focuses on the automation level; AI Act Art 86 focuses on the AI system's role in any decision by the deployer.
Article 26(9) of the AI Act explicitly states that deployers should use the information provided under Art 13 (provider instructions) to comply with their GDPR Art 35 Data Protection Impact Assessment obligation. This creates a direct procedural link: the DPIA required under GDPR for high-risk processing and the conformity assessment required under the AI Act both need to assess the same system — but they measure different things and are reviewed by different authorities.
GDPR Art 9 prohibits processing biometric data without an additional legal basis (e.g. explicit consent). The AI Act adds a further layer: certain biometric uses are completely banned under Art 5 (real-time remote biometric identification in publicly accessible spaces for law enforcement) regardless of GDPR consent. GDPR consent to process biometric data does not make an Art 5 banned practice lawful. The AI Act prohibition is absolute. GDPR regulates the data layer; the AI Act regulates the system output layer — both apply.
New GDPR Art 9(2)(k) — AI training lawful basis for special category data
COM(2025) 837 inserts a new explicit lawful basis: processing special category personal data (health, biometrics, ethnicity, etc.) is permitted in the context of developing and operating an AI system or AI model, subject to the conditions in new Art 9(5).
Conditions under Art 9(5):
- implement technical and organisational measures to avoid collecting special category data;
- where such data is found in training or testing datasets, remove it;
- if removal is disproportionate, isolate it so it cannot be used in outputs or disclosed to third parties.
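The Art 9(5) conditions read like an ordered data-governance rule: avoid collecting special category data, remove it where found, isolate it where removal is disproportionate. A hypothetical sketch of that decision order, assuming a simple boolean classification of each dataset (the function and flag names are illustrative, not terms from the proposal):

```python
from enum import Enum
from typing import Optional

class Action(Enum):
    EXCLUDE_AT_COLLECTION = 1  # technical/organisational measures to avoid collection
    REMOVE = 2                 # found in training/testing data: remove it
    ISOLATE = 3                # removal disproportionate: keep out of outputs and disclosure

def art_9_5_action(is_special_category: bool,
                   already_collected: bool,
                   removal_disproportionate: bool) -> Optional[Action]:
    """Map the three Art 9(5) conditions, as summarised above, onto one required action."""
    if not is_special_category:
        return None                          # ordinary personal data: Art 9 not engaged
    if not already_collected:
        return Action.EXCLUDE_AT_COLLECTION  # first duty: don't collect it at all
    if removal_disproportionate:
        return Action.ISOLATE                # fallback: fence it off from outputs
    return Action.REMOVE                     # default once collected: remove it
```

Note the ordering: removal is the default once the data is in a dataset; isolation is only the fallback where removal would be disproportionate.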
Rewritten GDPR Art 22 — automated contract decisions clarified
COM(2025) 837 rewrites Art 22(1) to explicitly state that automated contract decisions are lawful regardless of whether they could have been taken by non-automated means. This directly resolves the GDPR-AI Act tension on automated credit, hiring and insurance decisions. AI Act Art 14/26/86 obligations still apply fully — GDPR clarification of the lawful basis does not reduce the AI Act's human oversight and explanation requirements.
Reminder: COM(2025) 837's GDPR changes do not affect AI Act Article 5 prohibitions
The new GDPR Art 9(2)(k) lawful basis permits processing special category data for AI development. It does not exempt any AI system from EU AI Act Art 5 prohibitions on what that system outputs or does. A biometric categorisation system that infers race or political opinions from biometric data remains banned under Art 5(1)(g) — the GDPR training data basis is irrelevant to the AI Act output prohibition.
You almost certainly face both if your system does any of the following:
- trains on personal data, including scraped or purchased datasets
- makes or supports automated decisions with legal or similarly significant effects on individuals
- processes biometric data or other special category data
- falls into an Annex III high-risk category while processing EU residents' personal data
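That dual-applicability question can be sketched as a rough first-pass screen in boolean logic. The flag names below are illustrative simplifications, not legal categories from either regulation, and this is not legal advice:

```python
def regimes_triggered(processes_personal_data: bool,
                      annex_iii_high_risk: bool,
                      on_eu_market_or_output_used_in_eu: bool) -> set:
    """Rough first-pass screen for which of the two regimes a system plausibly triggers."""
    regimes = set()
    if processes_personal_data:
        # GDPR: any processing of EU residents' personal data, wherever you are based
        regimes.add("GDPR")
    if annex_iii_high_risk and on_eu_market_or_output_used_in_eu:
        # AI Act high-risk obligations: Annex III category + EU market/output nexus
        regimes.add("EU AI Act (high-risk)")
    return regimes

# e.g. a CV-screening tool trained on applicant data and sold in the EU
# triggers both regimes:
both = regimes_triggered(True, True, True)
```

The point of the sketch is the independence of the two branches: neither condition switches the other off, which is exactly why there is no exemption or waiver between the regimes.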
Automated Decision-Making: GDPR Art 22 + AI Act
Deep-dive on how both regimes govern automated decisions and what COM(2025) 837 changes.
Right to Explanation of AI Decisions (Art 86)
Who has the right, what it covers, and what deployers must provide.
AI Act Data Governance (Article 10)
How AI Act data requirements interact with GDPR data quality obligations.
COM 837 — GDPR & Data Law Changes
All 13 proposed GDPR amendments under COM(2025) 837 explained in full.
Biometric AI Systems
The GDPR Art 9 + AI Act Art 5 biometric overlap — what is banned and what is regulated.
EU AI Act — Complete Guide
The full regulation explained: risk tiers, who it applies to, and timeline.
Regumatrix checks your AI system against every Annex III high-risk category and all Article 5 prohibitions, then returns your exact risk tier, the obligations that apply, and your fine exposure under Article 99 in a cited report, in about 30 seconds.