The EU AI Act governs what your AI system does. GDPR governs what data it processes. If your AI system handles personal data about EU individuals — which most do — both regulations apply to the same product at the same time.
Two separate fine regimes. Two separate authorities. One product.
Your national data protection authority enforces GDPR: fines up to €20,000,000 or 4% of global annual turnover, whichever is higher (Art. 83). Your national market surveillance authority enforces the EU AI Act: up to €35,000,000 or 7%, whichever is higher (Art. 99). A GDPR fine does not reduce your AI Act exposure. Both investigations can run in parallel for the same product.
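Because both caps are "whichever is higher" formulas, exposure scales with turnover once a company is large enough. A minimal sketch of the arithmetic, using the statutory figures from Art. 83 GDPR and Art. 99 AI Act (the €2bn turnover is a made-up example, not from either regulation):

```python
def max_fine(turnover_eur: float, flat_cap: float, pct: float) -> float:
    """Statutory maximum: the higher of the flat cap and the
    percentage of global annual turnover."""
    return max(flat_cap, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical €2bn global annual turnover

gdpr_cap = max_fine(turnover, 20_000_000, 0.04)    # Art. 83 GDPR -> €80m
ai_act_cap = max_fine(turnover, 35_000_000, 0.07)  # Art. 99 AI Act -> €140m

# The regimes are cumulative, not alternative: the same product
# can face both, so worst-case exposure is the sum.
combined = gdpr_cap + ai_act_cap  # €220m
```

For smaller companies the flat cap dominates instead: at €100m turnover, 4% is €4m, so the GDPR maximum stays at €20m.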
GDPR: in force since May 2018
GPAI obligations: in force since August 2025
High-risk AI Act obligations: 4 months away
Not sure which regulations apply to your AI system?
Regumatrix maps your AI system against the EU AI Act and flags every obligation that fires — risk tier, Annex classification, fine exposure under Article 99, and the specific actions you need to take before the deadlines.
Check my system now — 3 free analyses included

Each overlap below is a situation where both regulations apply to the same action. They are not alternatives — you need to satisfy both.
High-risk AI providers must apply data-governance and bias-detection practices under Art. 10. GDPR Art. 9 prohibits processing sensitive personal data without a specific lawful basis. Both apply when your training data includes health records, ethnicity data, or biometrics.
Art. 10 — EU AI Act obligation
Art. 10(1)–(4): training, validation and testing datasets must be relevant, representative, examined for bias, and governed appropriately. Art. 10(5): permits processing special category data strictly for bias detection — only where six conditions are met (cannot use other data, pseudonymisation required, data deleted after use, no third-party access, etc.).
GDPR Art. 9 — GDPR obligation
Art. 9(1): processing of special category data (health, biometric, racial origin, etc.) is prohibited unless one of the Art. 9(2) exceptions applies. You must identify a lawful basis — typically research (9(2)(j)), public interest (9(2)(g)), or explicit consent (9(2)(a)) — before any processing begins.
GDPR Art. 22 restricts fully automated decisions that significantly affect people to three permitted situations. AI Act Art. 14 requires high-risk AI systems to be designed so a human can effectively monitor and override them. Both apply to AI that decides about people.
Art. 14 — EU AI Act obligation
Art. 14: providers must build high-risk AI with human-machine interface tools. Oversight persons must be able to understand capabilities and limits, detect anomalies, avoid automation bias, override or disregard the output, and halt the system completely. These are design-time requirements, implemented at deployment.
GDPR Art. 22 — GDPR obligation
Art. 22(1): a decision based solely on automated processing that produces legal effects or significantly affects someone may only be made if: (a) necessary for a contract, (b) authorised by law, or (c) based on explicit consent. When (a) or (c) applies, the controller must provide human review, the right to express a view, and the right to contest.
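The Art. 22 structure above is effectively a short decision tree: is the decision in scope at all, does a permitted basis exist, and which safeguards attach. A hypothetical triage helper, with field names and return strings that are my own shorthand for the summary above (illustrative only, not legal advice):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AutomatedDecision:
    solely_automated: bool        # no meaningful human involvement
    significant_effect: bool      # legal or similarly significant effect
    lawful_basis: Optional[str]   # "contract", "law", "explicit_consent", or None

def art22_status(d: AutomatedDecision) -> str:
    """Rough triage of a decision against the Art. 22 summary above."""
    if not (d.solely_automated and d.significant_effect):
        return "outside Art. 22(1)"
    if d.lawful_basis not in ("contract", "law", "explicit_consent"):
        return "prohibited: no Art. 22(2) basis"
    if d.lawful_basis in ("contract", "explicit_consent"):
        # Art. 22(3): human intervention, right to express a view, right to contest
        return "permitted with Art. 22(3) safeguards"
    return "permitted if the authorising law provides suitable safeguards"
```

Note that passing this triage does not touch the AI Act side: a decision permitted under Art. 22(2)(a) still needs the Art. 14 human-oversight design if the system is high-risk.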
When you deploy a high-risk AI system AND you are the GDPR data controller for the data it processes, you carry two independent obligation sets. The AI Act explicitly links them: your GDPR DPIA depends on your AI vendor's AI Act compliance.
Art. 26 — EU AI Act obligation
Art. 26: deployers must follow instructions for use, assign qualified human oversight, monitor the system, keep logs for at least 6 months, and inform workers before workplace deployment. Art. 26(9) specifically requires deployers to use the provider's Art. 13 instructions to complete their GDPR Data Protection Impact Assessment.
GDPR Art. 35 — GDPR obligation
Art. 35: processing that is likely to result in high risk to individuals — including systematic evaluation of personal aspects (profiling), decisions with legal effects, and large-scale processing of sensitive data — requires a DPIA before the processing begins. The DPIA must assess the necessity, proportionality, and risks of the processing.
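The Art. 35(3) triggers are disjunctive: any single one is enough to require a DPIA, and the assessment must come before processing starts. A sketch of that logic, with parameter names that are my shorthand for the trigger list summarised above:

```python
def dpia_required(systematic_profiling_with_effects: bool,
                  large_scale_sensitive_data: bool,
                  systematic_public_monitoring: bool) -> bool:
    """Art. 35(3) sketch: any one trigger mandates a DPIA
    *before* the processing begins."""
    return (systematic_profiling_with_effects
            or large_scale_sensitive_data
            or systematic_public_monitoring)
```

For a high-risk AI deployer, the first trigger (systematic evaluation with legal or similarly significant effects) is the one that usually fires, which is why Art. 26(9) ties the provider's instructions into the DPIA.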
Both laws give individuals a right to explanation when AI makes decisions about them. The AI Act explicitly defers to GDPR where GDPR already covers the situation — but AI Act Art. 86 reaches further, covering high-risk AI decisions that GDPR Art. 22 does not.
Art. 86 — EU AI Act obligation
Art. 86(1): any person subject to an adverse decision by a deployer based on a high-risk AI system (from Annex III) — one that significantly affects their health, safety or fundamental rights — has the right to obtain clear and meaningful explanations of the role of the AI system and the main elements of the decision taken.
GDPR Art. 22(3) — GDPR obligation
Art. 22(3): for automated decisions based on contract or explicit consent, controllers must implement safeguards: at minimum the right to obtain human intervention, to express a point of view, and to contest the decision. The companion transparency right to 'meaningful information about the logic involved, as well as the significance and the envisaged consequences' sits in Arts. 13(2)(f), 14(2)(g) and 15(1)(h), and, like Art. 22 itself, applies only to decisions 'based solely on automated processing.'
AI Act Art. 50 requires specific in-context disclosures about AI interactions and AI-generated content. GDPR Art. 13/14 requires full disclosures about personal data processing in privacy notices. Both are mandatory and address different moments in the user journey.
Art. 50 — EU AI Act obligation
Art. 50(1): providers must ensure that AI systems interacting directly with natural persons inform them they are interacting with an AI, unless this is obvious from the context. Art. 50(2): providers of systems generating synthetic audio, image, video or text must ensure outputs are marked as AI-generated. Art. 50(3): deployers using emotion recognition or biometric categorisation must inform the individuals exposed to it. Art. 50(4): deployers must disclose deep fakes and AI-generated text published to inform the public.
GDPR Art. 13/14 — GDPR obligation
Art. 13 (data collected from the person): at point of collection, provide identity, purpose, legal basis, retention period, and information about any automated decision-making including the logic, significance and consequences. Art. 14 (data collected indirectly): equivalent disclosures must be provided proactively, typically within one month.
Most compliance gaps at the AI Act / GDPR intersection are not deliberate. They come from teams that handled GDPR at launch (before the AI Act existed) and have not reviewed their legal basis and documentation against the new obligations. If any of the overlaps above applies to your system, you need a full review before August 2026.
COM(2025) 837 — the Digital Omnibus II — is a legislative proposal published in March 2025 that amends GDPR directly. It has not been enacted. If it becomes law, two of the five overlaps above change significantly:
New GDPR Art 9(2)(k) — lawful basis for AI training data
837 proposes inserting a new GDPR Art 9(2)(k): processing of special categories of personal data is permitted where “processing [is] in the context of the development and operation of an AI system or AI model,” subject to conditions in a new Art 9(5).
New Art 9(5) conditions: you must implement measures to avoid collecting special categories in training data. If found despite those measures, you must remove them. If removal requires disproportionate effort, you must prevent the data from being used in outputs or disclosed to third parties.
Impact on Overlap 1 (training data): If enacted, this creates a dedicated GDPR lawful basis for AI development — eliminating today's gap where Art 10(5) of the AI Act permits the action but GDPR has no matching basis. Plan compliance for current law while this proposal proceeds.
GDPR Art 22 amendment — contractual necessity clarified
837 replaces GDPR Art 22(1)(a) to read: an automated decision is lawful where it is necessary for entering into or performance of a contract “regardless of whether the decision could be taken otherwise than by solely automated means.”
Under current law, there is a compliance argument that “contractual necessity” does not justify automated processing if a human could have made the same decision. This has created legal uncertainty for credit scoring, insurance pricing, and fraud detection AI. The proposed amendment removes that uncertainty.
The proposal also adds a proportionality rule: where several equally effective automated processing solutions exist, the controller must use the less intrusive one.
Impact on Overlap 2 (automated decisions): If enacted, it would significantly reduce the GDPR burden on contractual automated decisions — particularly in financial services, lending, and insurance. The AI Act Art. 14 human oversight obligation is unchanged and still applies.
Plan compliance for current law. Monitor 837 progress at EUR-Lex — COM(2025) 837 final.
No changes are proposed under COM(2025) 836 for this topic. COM(2025) 837 changes are covered above.
The five overlaps on this page are a starting point. Whether they actually fire for your system depends on what your AI does, what data it processes, your role (provider or deployer), and which Annex III domain applies.
Describe your AI system in plain language. Regumatrix returns your risk tier, Annex classification, the exact EU AI Act obligations that apply, your fine exposure under Article 99, and the actions you need to take before August 2026. Eight sections, Article citations, about 30 seconds.
8-section report · Article citations · ~30 seconds · No credit card