Two EU regulations govern automated decisions that affect individuals. GDPR Article 22 (currently in force) controls when decisions may be based solely on automated processing. The EU AI Act (in force since August 2024, with high-risk obligations applying from August 2026) imposes human oversight obligations on any deployer using high-risk AI, regardless of automation level. COM(2025) 837 proposes to rewrite GDPR Art 22. Both regimes can apply to the same system simultaneously.
Under GDPR Art 22: if you make solely automated decisions with significant effects without a lawful basis, supervisory authorities can order you to stop and issue fines of up to €20M or 4% of global annual turnover, whichever is higher, under GDPR Article 83(5).
Under AI Act Arts 14/26/86: failure to ensure human oversight or to provide the right to explanation for a high-risk Annex III system carries penalties of up to €15M or 3% of global annual turnover under Art 99(4).
Both penalties can be assessed simultaneously — different regulators, different instruments, same underlying system.
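The maximum-exposure arithmetic behind those two fining ceilings can be sketched in a few lines. This is a hypothetical illustration using only the statutory caps cited above; the function name and turnover input are the author's assumptions, and real fines are discretionary and fact-specific.

```python
def max_fine_exposure(global_turnover_eur: float) -> dict:
    """Statutory fine ceilings cited above.

    GDPR Art 83(5): up to EUR 20M or 4% of worldwide annual turnover,
    whichever is higher. AI Act Art 99(4): up to EUR 15M or 3%,
    whichever is higher. This computes ceilings only, not likely fines.
    """
    gdpr_cap = max(20_000_000, 0.04 * global_turnover_eur)
    ai_act_cap = max(15_000_000, 0.03 * global_turnover_eur)
    return {
        "gdpr_art_83_5": gdpr_cap,
        "ai_act_art_99_4": ai_act_cap,
        # Different regulators, different instruments: both can be assessed.
        "combined_ceiling": gdpr_cap + ai_act_cap,
    }

# A company with EUR 1bn turnover: the percentage tiers dominate.
exposure = max_fine_exposure(1_000_000_000)
```

Note that for small undertakings the flat amounts (€20M / €15M) dominate, so the exposure floor is substantial regardless of turnover.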
Not sure whether your AI system triggers both GDPR Art 22 and AI Act obligations?
Check your system in 30 seconds: 3 free analyses, no credit card required.
Article 22 of Regulation (EU) 2016/679 (GDPR) applies today to any solely automated decision that produces legal effects or similarly significantly affects the data subject.
The AI Act imposes distinct obligations that apply from August 2026 regardless of whether the decision is solely automated. These obligations are triggered by the use of a high-risk AI system — not by the automation level of the final decision.
Providers of high-risk AI systems must design and develop them so that natural persons can effectively oversee the system during use. The system must enable the oversight person to: understand the system's capabilities and limitations, detect anomalies and dysfunctions, correctly interpret outputs, and decide not to use the output or to override or reverse it. The oversight person must also be able to intervene in the system's operation or interrupt it through a 'stop' button or similar procedure.
Deployers of high-risk AI systems must assign human oversight to natural persons with the necessary competence, training and authority. This is not a passive right — it is an active organisational obligation. The deployer must verify the oversight person exists, has the right skills, and is genuinely empowered to act.
Any affected person subject to a decision taken by a deployer on the basis of output from a high-risk Annex III AI system (with the exception of systems listed under Annex III, point 2, critical infrastructure) that produces legal effects or similarly significantly affects them adversely has the right to obtain from the deployer clear and meaningful explanations of: (1) the role of the AI system in the decision-making procedure, and (2) the main elements of the decision. This right applies regardless of whether the final decision was made by a human or by automated means.
| Aspect | GDPR Art 22 (current) | AI Act Art 86 (from Aug 2026) |
|---|---|---|
| Legal basis | Regulation (EU) 2016/679, Article 22 | Regulation (EU) 2024/1689, Articles 14, 26, 86 |
| Who it protects | Data subjects (natural persons whose personal data is processed) | Affected persons subject to a decision taken by a deployer using Annex III AI |
| Trigger condition | Decision based SOLELY on automated processing + legal effect or similarly significant effect | Decision taken by deployer based on high-risk Annex III AI output + adverse impact on health, safety, or fundamental rights |
| Human involvement breaks it? | Yes — meaningful human review breaks 'solely automated' criterion (but rubber-stamping does not) | No — Art 86 applies where the deployer makes a decision using AI output, regardless of automation level |
| What the right gives | Right to human intervention, to express point of view, to contest the decision | Right to clear and meaningful explanation of the AI system's role and the main elements of the decision |
| Obligation on whom | Controller (the entity deciding the purpose and means of processing) | Deployer (the entity using the high-risk AI system in a professional context) |
| Exceptions | Contract necessity, legal authorisation, explicit consent — plus national law derogations | Union or national law exceptions (Art 86(2)); subsidiarity — only applies if right not already provided elsewhere (Art 86(3)) |
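The differing trigger conditions in the table can be expressed as two independent predicates. This is a hypothetical sketch of the decision logic, not legal advice: the `Decision` fields are the author's simplifications of the statutory tests.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Hypothetical simplification of a consequential decision about a person."""
    solely_automated: bool            # no meaningful human review
    legal_or_significant_effect: bool
    uses_annex_iii_system: bool       # high-risk AI under AI Act Annex III
    annex_iii_point_2: bool           # critical-infrastructure carve-out (Art 86(1))
    adverse_impact: bool              # on health, safety or fundamental rights

def gdpr_art_22_applies(d: Decision) -> bool:
    # Art 22 bites only on SOLELY automated decisions with
    # legal or similarly significant effects.
    return d.solely_automated and d.legal_or_significant_effect

def ai_act_art_86_applies(d: Decision) -> bool:
    # Art 86 applies regardless of automation level, but only to
    # Annex III systems (except point 2) with adverse impact.
    return (d.uses_annex_iii_system
            and not d.annex_iii_point_2
            and d.legal_or_significant_effect
            and d.adverse_impact)

# A hiring tool with meaningful human review: Art 22 does not bite, Art 86 does.
hiring = Decision(solely_automated=False, legal_or_significant_effect=True,
                  uses_annex_iii_system=True, annex_iii_point_2=False,
                  adverse_impact=True)
```

The hiring example captures the table's central point: adding a human reviewer can take you out of GDPR Art 22 but does nothing to escape AI Act Art 86.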
Article 3, point 7 of COM(2025) 837 replaces GDPR Article 22(1) and (2). This is the most significant change to automated decision-making rules since GDPR came into force.
Key change 1 — The “could have been taken otherwise” defence is removed
Under the proposed new Art 22(1)(a), a decision based solely on automated processing is lawful where it is necessary for entering into or performance of a contract between the data subject and the controller, regardless of whether the decision could be taken otherwise than by solely automated means. This removes a frequently invoked argument that challenged automated contract decisions on the grounds that a human could theoretically have made the same decision.
Key change 2 — Proportionality: use the less intrusive automated solution
New Art 22(2) adds: “When several equally effective automated processing solutions exist, the data controller shall use the less intrusive of such solutions.” This is a direct proportionality requirement applied to automated decision systems — if a less data-intensive approach achieves the same outcome, you must use it.
Key change 3 — Data subjects retain safeguard rights
The proposed Art 22(2) preserves the existing safeguards: for automated decisions based on contract necessity or explicit consent, controllers must still implement suitable measures to safeguard the data subject's rights, including at least the right to obtain human intervention, to express their point of view, and to contest the decision.
Important: COM(2025) 837 GDPR changes do NOT affect AI Act obligations
Even if COM(2025) 837 is enacted and automated contract decisions become easier to justify under the updated GDPR Art 22, the EU AI Act's obligations under Art 14, Art 26, and Art 86 remain fully in effect. GDPR governs whether you may process data to automate the decision. The AI Act governs how the system must be designed, how humans must oversee it, and what explanation individuals can demand. Both apply simultaneously.
If your system processes personal data to produce an AI-driven output that a deployer uses to make a consequential decision about a natural person, assume both GDPR and the AI Act apply. Here is how to structure your compliance programme:
Classify under GDPR Art 22 first
Determine whether the decision is based solely on automated processing and has legal or similarly significant effects. If yes, identify your lawful basis (contract, legal authorisation, or consent) and implement the required safeguards (human intervention, objection, contest).
Classify the AI system under AI Act Annex III
Check whether the AI system falls within any of the 8 Annex III domains. If yes, the full high-risk compliance regime applies — including risk management (Art 9), data governance (Art 10), technical documentation (Art 11), human oversight design (Art 14), and conformity assessment (Art 43).
Implement Art 26 deployer obligations
Assign human oversight to a competent person (Art 26(2)). Maintain logs for at least 6 months (Art 26(6)). Monitor performance and report serious incidents (Art 26(5)). If your organisation is a public authority, complete the fundamental rights impact assessment (FRIA) first (Art 27).
Enable Art 86 explanations
Prepare the information needed to answer Art 86 requests — the role of the AI system in the decision-making process, and the main elements of the decision. This information must be clear and meaningful to a non-technical person. The deployer is obligated to provide it; the provider should supply it in the technical documentation.
Conduct a DPIA where required
Article 26(9) requires deployers to use the information provided by the provider under Art 13 (instructions for use) to comply with their obligation to carry out a data protection impact assessment under GDPR Article 35, where applicable. For automated scoring and profiling of individuals at scale, a DPIA will almost certainly be required.
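The five steps above can be strung together as a triage checklist. The sketch below is a hypothetical ordering aid, not a compliance tool: the input flags and step labels are the author's simplifications of the classifications described in this section.

```python
def compliance_triage(solely_automated: bool,
                      significant_effect: bool,
                      annex_iii_high_risk: bool,
                      large_scale_profiling: bool) -> list:
    """Orders the programme steps described above for a given system.

    Real classification requires legal analysis of the specific
    system and context; these booleans only stand in for it.
    """
    steps = []
    # Step 1: GDPR Art 22 classification
    if solely_automated and significant_effect:
        steps.append("GDPR Art 22: identify lawful basis + safeguards")
    # Steps 2-4: AI Act high-risk regime for Annex III systems
    if annex_iii_high_risk:
        steps.append("AI Act: full high-risk regime (Arts 9-11, 14, 43)")
        steps.append("Art 26 deployer duties: oversight, logs >= 6 months")
        steps.append("Art 86: prepare explanation materials")
    # Step 5: DPIA where required
    if large_scale_profiling or (solely_automated and significant_effect):
        steps.append("GDPR Art 35: conduct a DPIA")
    return steps
```

A system that ticks every box gets all five steps; a system outside both regimes gets an empty checklist, which is itself worth documenting.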
You likely face overlap if your system does any of the following:

- makes solely automated decisions with legal or similarly significant effects on individuals (the GDPR Art 22 trigger);
- uses, or feeds into, a high-risk AI system in an Annex III domain such as recruitment (the AI Act trigger);
- scores or profiles individuals at scale using personal data.

For deeper coverage, see these related guides:
Right to Explanation of AI Decisions (Art 86)
Who has the right, what triggers it, and what deployers must provide.
Human Oversight (Article 14)
Design requirements, override controls, and who must be assigned oversight.
AI Deployer Obligations (Article 26)
The full checklist for deployers using high-risk AI systems.
COM 837 — GDPR & Data Law Changes
Full guide to all 13 proposed GDPR amendments under COM(2025) 837.
EU AI Act vs GDPR
Side-by-side comparison — scope, obligations, data rights, and sanctions.
Recruitment & HR AI
Automated hiring decisions, Annex III §4 obligations, and GDPR Art 22 interaction.
Regumatrix checks your AI system against the EU AI Act Annex III high-risk categories, the obligations under Arts 14, 26, and 86, and your fine exposure under Art 99. Cited report in ~30 seconds.
Analyse your AI system: 3 free analyses, no credit card required.