
Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Both laws apply simultaneously. Dual fines: up to €35M / 7% + €20M / 4%.

EU AI Act + GDPR: What You Have to Comply With Under Both Laws

The EU AI Act governs what your AI system does. GDPR governs what data it processes. If your AI system handles personal data about EU individuals — which most do — both regulations apply to the same product at the same time.

Two separate fine regimes. Two separate authorities. One product.

Your national data protection authority enforces GDPR: up to €20,000,000 or 4% of global annual turnover (Art. 83). Your national market surveillance authority enforces the EU AI Act: up to €35,000,000 or 7% (Art. 99). A GDPR fine does not reduce your AI Act exposure. Both investigations can run in parallel for the same product.
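Because there is no offset between the two regimes, worst-case exposure is simple arithmetic. A minimal illustrative sketch in Python (the function name is ours; it assumes the "whichever is higher" rule that both fine provisions apply to undertakings, and it is not legal advice):

```python
def max_dual_exposure(global_annual_turnover_eur: float) -> dict:
    """Illustrative worst-case fine ceilings for a single AI product.

    Assumes the 'whichever is higher' rule for undertakings:
    GDPR Art. 83(5) caps at EUR 20m or 4% of global annual turnover;
    AI Act Art. 99(3) caps at EUR 35m or 7%.
    """
    gdpr_cap = max(20_000_000, 0.04 * global_annual_turnover_eur)
    ai_act_cap = max(35_000_000, 0.07 * global_annual_turnover_eur)
    return {
        "gdpr_max_eur": gdpr_cap,
        "ai_act_max_eur": ai_act_cap,
        # No offset between regimes: separate authorities, separate
        # investigations, so the two ceilings simply add up.
        "combined_max_eur": gdpr_cap + ai_act_cap,
    }

# For EUR 1bn turnover, 4% (EUR 40m) and 7% (EUR 70m) both exceed the
# fixed amounts, so combined worst-case exposure is EUR 110m.
exposure = max_dual_exposure(1_000_000_000)
```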

  • GDPR: in force since May 2018
  • GPAI obligations: in force since August 2025
  • High-risk AI Act obligations: apply from August 2026

Not sure which regulations apply to your AI system?

Regumatrix maps your AI system against the EU AI Act and flags every obligation that fires — risk tier, Annex classification, fine exposure under Article 99, and the specific actions you need to take before the deadlines.

Check my system now — 3 free analyses included

5 places where the EU AI Act and GDPR overlap

Each overlap below is a situation where both regulations apply to the same action. They are not alternatives — you need to satisfy both.

1. Training AI on personal data

AI Act Art. 10 · GDPR Art. 9

High-risk AI providers must apply data governance and bias-detection practices under Art. 10. GDPR Art. 9 prohibits processing special category personal data unless a specific exception provides a lawful basis. Both apply when your training data includes health records, ethnicity data, or biometrics.

Art. 10 — EU AI Act obligation

Art. 10(1)–(4): training, validation and testing datasets must be relevant, representative, examined for bias, and governed appropriately. Art. 10(5): permits processing special category data strictly for bias detection — only where six conditions are met (cannot use other data, pseudonymisation required, data deleted after use, no third-party access, etc.).

GDPR Art. 9 — GDPR obligation

Art. 9(1): processing of special category data (health, biometric, racial or ethnic origin, etc.) is prohibited unless one of the Art. 9(2) exceptions applies. You must identify a lawful basis — typically research (9(2)(j)), public interest (9(2)(g)), or explicit consent (9(2)(a)) — before any processing begins.

How they interact: AI Act Art. 10(5) is a permission to process under the AI Act regime. It does not substitute for a GDPR lawful basis. If you use Art. 10(5) without a GDPR Art. 9(2) basis, you are compliant with the AI Act but still violating GDPR. Both boxes must be checked separately.
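That separation can be expressed as a plain conjunction: the AI Act permission and the GDPR lawful basis are independent checks, and failing either fails the whole. A hypothetical sketch (the class and function names are ours):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainingRun:
    """Illustrative model of one training run touching special category data."""
    meets_art_10_5_conditions: bool    # all six AI Act Art. 10(5) safeguards hold
    gdpr_art_9_2_basis: Optional[str]  # e.g. "9(2)(a)", "9(2)(g)", "9(2)(j)", or None

def lawful_under_both(run: TrainingRun) -> bool:
    # The AI Act permission and the GDPR lawful basis are checked
    # separately: satisfying one never substitutes for the other.
    return run.meets_art_10_5_conditions and run.gdpr_art_9_2_basis is not None

# Compliant with the AI Act but still violating GDPR:
gap = TrainingRun(meets_art_10_5_conditions=True, gdpr_art_9_2_basis=None)
assert lawful_under_both(gap) is False
```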
2. Automated decision-making

AI Act Art. 14 · GDPR Art. 22

GDPR Art. 22 restricts fully automated decisions that significantly affect people to three permitted situations. AI Act Art. 14 requires high-risk AI systems to be designed so a human can effectively monitor and override them. Both apply to AI that decides about people.

Art. 14 — EU AI Act obligation

Art. 14: providers must build high-risk AI with human-machine interface tools. Oversight persons must be able to understand capabilities and limits, detect anomalies, avoid automation bias, override or disregard the output, and halt the system completely. These are design-time requirements on the provider, exercised by the deployer in operation.

GDPR Art. 22 — GDPR obligation

Art. 22(1): a decision based solely on automated processing that produces legal effects or significantly affects someone may only be made if: (a) necessary for a contract, (b) authorised by law, or (c) based on explicit consent. When (a) or (c) applies, the controller must provide human review, the right to express a view, and the right to contest.

How they interact: Art. 14 ensures the technical capability for human intervention exists. Art. 22 requires you to actually enable individuals to trigger that intervention. A system meeting Art. 14's design requirements still needs to satisfy Art. 22's lawful basis for each automated decision. Both operate simultaneously.
3. Deployer obligations and DPIA

AI Act Art. 26 · GDPR Art. 35

When you deploy a high-risk AI system AND you are the GDPR data controller for the data it processes, you carry two independent obligation sets. The AI Act explicitly links them: your GDPR DPIA depends on your AI vendor's AI Act compliance.

Art. 26 — EU AI Act obligation

Art. 26: deployers must follow instructions for use, assign qualified human oversight, monitor the system, keep logs for at least 6 months, and inform workers before workplace deployment. Art. 26(9) specifically requires deployers to use the provider's Art. 13 instructions to complete their GDPR Data Protection Impact Assessment.

GDPR Art. 35 — GDPR obligation

Art. 35: processing that is likely to result in high risk to individuals — including systematic evaluation of personal aspects (profiling), decisions with legal effects, and large-scale processing of sensitive data — requires a DPIA before the processing begins. The DPIA must assess the necessity, proportionality, and risks of the processing.

How they interact: If your AI vendor does not give you adequate transparency documentation under Art. 13 (instructions for use with capability/limitation descriptions), you cannot complete a sufficient GDPR DPIA. Your GDPR compliance therefore depends on your AI vendor's AI Act compliance. Audit your vendors against Art. 13 before relying on their systems for high-risk decisions.
4. Right to explanation of AI decisions

AI Act Art. 86 · GDPR Art. 22(3)

Both laws give individuals a right to explanation when AI makes decisions about them. The AI Act explicitly defers to GDPR where GDPR already covers the situation — but AI Act Art. 86 reaches further, covering high-risk AI decisions that GDPR Art. 22 does not.

Art. 86 — EU AI Act obligation

Art. 86(1): any person subject to an adverse decision by a deployer based on a high-risk AI system (from Annex III) — one that significantly affects their health, safety or fundamental rights — has the right to obtain clear and meaningful explanations of the role of the AI system and the main elements of the decision taken.

GDPR Art. 22(3) — GDPR obligation

Art. 22(3): for automated decisions based on contract or consent, controllers must provide 'meaningful information about the logic involved, as well as the significance and the envisaged consequences' of the processing. This right applies only to decisions 'based solely on automated processing.'

How they interact: Art. 86(3) of the AI Act states it 'shall apply only to the extent that the right referred to in paragraph 1 is not otherwise provided for under Union law.' Where GDPR Art. 22(3) already covers a decision, Art. 86 adds nothing new. Where a decision involves human review (removing it from Art. 22's 'solely automated' scope), Art. 86 fills the gap for Annex III systems.
5. Transparency and disclosure to users

AI Act Art. 50 · GDPR Art. 13/14

AI Act Art. 50 requires specific in-context disclosures about AI interactions and AI-generated content. GDPR Art. 13/14 requires full disclosures about personal data processing in privacy notices. Both are mandatory and address different moments in the user journey.

Art. 50 — EU AI Act obligation

Art. 50(1): providers whose AI system directly interacts with natural persons must ensure those persons are informed they are interacting with an AI — unless it is obvious from context. Art. 50(2): providers of systems generating synthetic audio, image, video or text must ensure the output is marked as AI-generated. Art. 50(3): deployers using emotion recognition or biometric categorisation must inform individuals in advance.

GDPR Art. 13/14 — GDPR obligation

Art. 13 (data collected from the person): at point of collection, provide identity, purpose, legal basis, retention period, and information about any automated decision-making including the logic, significance and consequences. Art. 14 (data collected indirectly): equivalent disclosures must be provided proactively, typically within one month.

How they interact: These obligations operate on different channels and timings. AI Act Art. 50 is a contextual, in-use disclosure — at the moment of the interaction. GDPR Art. 13/14 is a comprehensive privacy notice obligation. Both are mandatory. Your privacy policy must describe automated decision-making as required by GDPR; your interface must also carry the AI Act Art. 50 disclosure at the point of use.

Warning signals: your AI system almost certainly has a gap in one of these

Most compliance gaps at the AI Act / GDPR intersection are not deliberate. They come from teams that handled GDPR at launch (before the AI Act existed) and have not reviewed their legal basis and documentation against the new obligations. If any of these apply to you, you need a full review before August 2026:

  • Your privacy notice mentions AI-assisted decisions but not the specific logic, significance or consequences
  • You use an AI vendor for high-risk decisions and have not checked if their Art. 13 documentation is sufficient for your GDPR DPIA
  • Your AI system makes credit, hiring, or medical decisions — but you have not identified a lawful basis under GDPR Art. 22
  • Your AI product was assessed for GDPR compliance before August 2024 (when the AI Act entered into force) and has not been re-reviewed
  • Your AI-generated content or chatbot product does not carry a visible Art. 50 disclosure at the point of use
Check my system against both regulations →
PROPOSAL — not yet enacted law · COM(2025) 837

What the Digital Omnibus II proposal changes at the GDPR–AI Act intersection

COM(2025) 837 — the Digital Omnibus II — is a legislative proposal published in 2025 that amends GDPR directly. It has not been enacted. If it becomes law, two of the five overlaps above change significantly:

COM(2025) 837, Art. 3 pt. 3

New GDPR Art 9(2)(k) — lawful basis for AI training data

COM(2025) 837 proposes inserting a new GDPR Art 9(2)(k): processing of special categories of personal data is permitted where “processing [is] in the context of the development and operation of an AI system or AI model,” subject to conditions in a new Art 9(5).

New Art 9(5) conditions: you must implement measures to avoid collecting special categories in training data. If special category data slip through despite those measures, you must remove them. If removal requires disproportionate effort, you must prevent the data from being used in outputs or disclosed to third parties.
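The proposed Art 9(5) conditions read as a fallback cascade: prevent, then remove, then contain. A hypothetical sketch of that logic as described in the proposal summary above (function and parameter names are ours):

```python
def handle_special_category_finding(can_remove: bool,
                                    removal_disproportionate: bool) -> str:
    """Illustrative cascade for the proposed GDPR Art. 9(5): special
    category data discovered in a training set despite prevention
    measures."""
    if can_remove and not removal_disproportionate:
        return "remove from training data"
    # Removal is impossible or disproportionate: fall back to containment.
    return "prevent use in outputs and disclosure to third parties"
```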

Impact on Overlap 1 (training data): If enacted, this creates a dedicated GDPR lawful basis for AI development — eliminating today's gap where Art 10(5) of the AI Act permits the action but GDPR has no matching basis. Plan compliance for current law while this proposal proceeds.

COM(2025) 837, Art. 3 pt. 7

GDPR Art 22 amendment — contractual necessity clarified

COM(2025) 837 replaces GDPR Art 22(1)(a) to read: an automated decision is lawful where it is necessary for entering into or performance of a contract “regardless of whether the decision could be taken otherwise than by solely automated means.”

Under current law, there is a compliance argument that “contractual necessity” does not justify automated processing if a human could have made the same decision. This has created legal uncertainty for credit scoring, insurance pricing, and fraud detection AI. The proposed amendment removes that uncertainty.

The proposal also adds a proportionality rule: where several equally effective automated processing solutions exist, the controller must use the least intrusive one.

Impact on Overlap 2 (automated decisions): If enacted, it would significantly reduce the GDPR burden on contractual automated decisions — particularly in financial services, lending, and insurance. The AI Act Art. 14 human oversight obligation is unchanged and still applies.

Plan compliance for current law. Monitor 837 progress at EUR-Lex — COM(2025) 837 final.

Common questions

Do I need to comply with both the EU AI Act and GDPR?
Yes, if your AI system processes personal data about EU individuals. The two laws cover different things: GDPR governs how you collect, process, and store personal data. The EU AI Act governs what your AI system does and how it is built. A credit scoring AI that processes applicant data triggers both. Compliance with one does not equal compliance with the other — they are enforced by different authorities and carry separate fine regimes.
If my AI system complies with GDPR Article 22, does that satisfy AI Act Article 14 human oversight?
No. GDPR Article 22 and EU AI Act Article 14 are separate obligations with separate purposes. GDPR Article 22 restricts when you can make a fully automated decision that significantly affects someone. AI Act Article 14 requires that high-risk AI systems be designed so an oversight person can understand, monitor, override, and halt the system. Even if you satisfy GDPR Article 22 via explicit consent, you still need to design your system to meet Article 14 if it falls into the Annex III high-risk list. Both obligations must be satisfied independently.
Does GDPR Article 22(3) replace the AI Act Article 86 right to explanation?
Not in every case. Article 86(3) of the EU AI Act says the right to explanation 'shall apply only to the extent that the right referred to in paragraph 1 is not otherwise provided for under Union law.' If GDPR Article 22(3) already gives a person a meaningful explanation right for a specific automated decision, Article 86 does not create a duplicate obligation. However, AI Act Article 86 has broader reach: it covers any Annex III high-risk system decision that adversely and significantly affects someone — including decisions that involve human review, which fall outside GDPR Article 22's 'solely automated' threshold. Where GDPR Article 22 does not apply, Article 86 fills the gap.
Can the same AI system attract fines under both GDPR and the EU AI Act?
Yes. The laws are enforced by different authorities. Your national data protection authority (DPA) enforces GDPR: fines up to €20,000,000 or 4% of global annual turnover under Article 83. Your national market surveillance authority enforces the EU AI Act: fines up to €35,000,000 or 7% under Article 99. The same AI product can attract both investigations simultaneously. There is no offset — a GDPR fine does not reduce your AI Act liability, and vice versa.
What GDPR lawful basis applies to training AI models on personal data?
Under current law, AI Act Article 10(5) permits processing special category personal data for bias detection in high-risk AI systems — but only under six strict conditions. Article 10(5) is not itself a GDPR lawful basis. You still need a separate basis under GDPR Article 9(2): typically scientific research (Art 9(2)(j)), substantial public interest (Art 9(2)(g)), or — for commercial AI — explicit consent (Art 9(2)(a)). COM(2025) 837 proposes adding a new GDPR Article 9(2)(k): a dedicated lawful basis for processing special categories in the context of AI system development and operation. This proposal is not yet enacted law.

No changes are proposed under COM(2025) 836 for this topic. COM(2025) 837 changes are covered above.

Related compliance guides

  • Fines & penalties explained (Art 99)
  • 8 banned AI practices (Art 5)
  • All enforcement dates
  • Human oversight requirements (Art 14)
  • Data governance for AI (Art 10)
  • EU AI Act vs GDPR — side-by-side

Know exactly which obligations apply before a regulator tells you

The five overlaps on this page are a starting point. Whether they actually fire for your system depends on what your AI does, what data it processes, your role (provider or deployer), and which Annex III domain applies.

Describe your AI system in plain language. Regumatrix returns your risk tier, Annex classification, the exact EU AI Act obligations that apply, your fine exposure under Article 99, and the actions you need to take before August 2026. Eight sections, Article citations, about 30 seconds.

Analyse my system free — 3 checks included →

All compliance guides

8-section report · Article citations · ~30 seconds · No credit card