Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
GDPR Art 22 + AI Act Arts 14, 26, 86 · COM(2025) 837 — GDPR Art 22 rewritten (proposal, not yet enacted law)

Automated Decision-Making: GDPR Article 22 + EU AI Act

Two EU regulations govern automated decisions that affect individuals. GDPR Article 22 (currently in force) controls when decisions may be based solely on automated processing. The EU AI Act (whose high-risk obligations apply from August 2026) imposes human oversight obligations on any deployer using high-risk AI, regardless of automation level. COM(2025) 837 proposes to rewrite GDPR Art 22. Both regimes can apply to the same system simultaneously.

What is at stake

Under GDPR Art 22: if you make solely automated decisions with significant effects without a lawful basis, supervisory authorities can order you to stop and can impose fines of up to €20M or 4% of global annual turnover, whichever is higher, under GDPR Article 83(5).

Under AI Act Arts 14/26/86: failure to ensure human oversight or to provide the right to explanation for a high-risk Annex III system carries penalties of up to €15M or 3% of global annual turnover, whichever is higher, under Art 99(4).

Both penalties can be assessed simultaneously — different regulators, different instruments, same underlying system.
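
The exposure arithmetic can be sketched in a few lines. This is a minimal, hypothetical illustration of how the two fine ceilings compare for a given turnover, not legal advice; the function name is our own:

```python
def fine_cap(global_turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper bound of an EU administrative fine: the higher of a fixed cap
    and a percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, global_turnover_eur * pct / 100)

# For a company with EUR 1bn worldwide annual turnover:
gdpr_cap = fine_cap(1_000_000_000, 20_000_000, 4)    # GDPR Art 83(5) ceiling: EUR 40M
ai_act_cap = fine_cap(1_000_000_000, 15_000_000, 3)  # AI Act Art 99(4) ceiling: EUR 30M
```

For smaller companies the fixed cap dominates: at EUR 100M turnover the GDPR ceiling stays at EUR 20M.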


GDPR Article 22 — current rules (in force now)

Article 22 of Regulation (EU) 2016/679 (GDPR) applies today to any solely automated decision that produces legal effects or similarly significantly affects the data subject.

When GDPR Art 22 applies

  • A decision is based solely on automated processing (including profiling)
  • The decision produces legal effects (e.g. loan refusal, contract termination) or similarly significantly affects the person (e.g. affects access to services, prices, opportunities)
  • The data subject is a natural person
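
The conditions above can be expressed as a simple boolean check. This is a simplified triage sketch, not a legal determination; the flag names are hypothetical:

```python
def gdpr_art22_applies(solely_automated: bool,
                       meaningful_human_review: bool,
                       significant_effect: bool,
                       natural_person: bool) -> bool:
    """All conditions must hold. Meaningful human review breaks the
    'solely automated' criterion; rubber-stamping does not count."""
    return (solely_automated
            and not meaningful_human_review
            and significant_effect
            and natural_person)
```

Note that the second flag does the heavy lifting in practice: supervisory authorities look at whether the reviewer exercises genuine discretion, not at whether a human is nominally in the loop.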

Current lawful bases for solely automated decisions

  • Contract necessity — the decision is necessary for entering into or performance of a contract with the data subject
  • Legal authorisation — authorised by Union or Member State law with appropriate safeguards
  • Explicit consent — the data subject has given explicit consent

Data subject safeguards required under the contract or consent basis

  • Right to obtain human intervention from the controller
  • Right to express their point of view about the decision
  • Right to contest the decision

EU AI Act — human oversight and right to explanation

The AI Act imposes distinct obligations that apply from August 2026 regardless of whether the decision is solely automated. These obligations are triggered by the use of a high-risk AI system — not by the automation level of the final decision.

Art 14 — Human oversight (provider obligation)

Providers of high-risk AI systems must design and develop them so that natural persons can effectively oversee the system during use. The system must enable the oversight person to understand the system's capabilities and limitations, detect anomalies and dysfunctions, correctly interpret outputs, and decide not to use the output or to override or reverse it. The oversight person must also be able to intervene in the system's operation or halt it through a stop button or similar procedure.

Art 26 — Human oversight (deployer obligation)

Deployers of high-risk AI systems must assign human oversight to natural persons with the necessary competence, training and authority. This is not a passive right — it is an active organisational obligation. The deployer must verify the oversight person exists, has the right skills, and is genuinely empowered to act.

Art 86 — Right to explanation (individual right from August 2026)

Any affected person subject to a decision by a deployer — based on a high-risk Annex III AI system (except §2) — that produces legal effects or significantly affects them adversely has the right to obtain from the deployer clear and meaningful explanations of: (1) the role of the AI system in the decision-making procedure, and (2) the main elements of the decision. This right applies regardless of whether the final decision was made by a human or by automated means.

Side-by-side: GDPR Art 22 vs EU AI Act Art 86

  • Legal basis — GDPR: Regulation (EU) 2016/679, Article 22. AI Act: Regulation (EU) 2024/1689, Articles 14, 26, 86.
  • Who it protects — GDPR: data subjects (natural persons whose personal data is processed). AI Act: affected persons subject to a decision taken by a deployer using Annex III AI.
  • Trigger condition — GDPR: decision based SOLELY on automated processing, with legal or similarly significant effect. AI Act: decision taken by a deployer based on high-risk Annex III AI output, with adverse impact on health, safety, or fundamental rights.
  • Does human involvement break it? — GDPR: yes, meaningful human review breaks the 'solely automated' criterion (but rubber-stamping does not). AI Act: no, Art 86 applies where the deployer makes a decision using AI output, regardless of automation level.
  • What the right gives — GDPR: right to human intervention, to express one's point of view, and to contest the decision. AI Act: right to a clear and meaningful explanation of the AI system's role and the main elements of the decision.
  • Obligation on whom — GDPR: the controller (the entity deciding the purpose and means of processing). AI Act: the deployer (the entity using the high-risk AI system in a professional context).
  • Exceptions — GDPR: contract necessity, legal authorisation, explicit consent, plus national law derogations. AI Act: Union or national law exceptions (Art 86(2)); subsidiarity, i.e. the right only applies if not already provided elsewhere (Art 86(3)).

COM(2025) 837 — GDPR Article 22 rewritten

PROPOSAL — COM(2025) 837 — not yet enacted law

Article 3, point 7 of COM(2025) 837 replaces GDPR Article 22(1) and (2). This is the most significant change to automated decision-making rules since GDPR came into force.

Key change 1 — The “could have been taken otherwise” defence is removed

Under the proposed new Art 22(1)(a), a decision based solely on automated processing is lawful where it is necessary for entering into or performance of a contract between the data subject and the controller, regardless of whether the decision could be taken otherwise than by solely automated means. This removes a frequently invoked argument that challenged automated contract decisions on the grounds that a human could theoretically have made the same decision.

Key change 2 — Proportionality: use the less intrusive automated solution

New Art 22(2) adds: “When several equally effective automated processing solutions exist, the data controller shall use the less intrusive of such solutions.” This is a direct proportionality requirement applied to automated decision systems — if a less data-intensive approach achieves the same outcome, you must use it.
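
The proposed rule is a two-step selection: filter to the equally effective solutions, then pick the least intrusive among them. A toy sketch, in which the 'effectiveness' and 'intrusiveness' scores are hypothetical inputs not defined by the proposal:

```python
def least_intrusive(solutions: list) -> dict:
    """Among the most effective candidates, return the least intrusive one."""
    best = max(s["effectiveness"] for s in solutions)
    equally_effective = [s for s in solutions if s["effectiveness"] == best]
    return min(equally_effective, key=lambda s: s["intrusiveness"])

options = [
    {"name": "full-profile scoring", "effectiveness": 0.9, "intrusiveness": 3},
    {"name": "minimal-data scoring", "effectiveness": 0.9, "intrusiveness": 1},
]
# least_intrusive(options) selects "minimal-data scoring"
```

The practical upshot: a more data-hungry system cannot be justified by convenience alone once a less intrusive solution achieves the same outcome.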

Key change 3 — Data subjects retain safeguard rights

The proposed Art 22(2) preserves the existing safeguards: for contract-lawful and consent-based automated decisions, controllers must still implement suitable measures to safeguard the data subject's rights — at least the right to obtain human intervention, to express their point of view, and to contest the decision.

Important: the COM(2025) 837 GDPR changes do NOT affect AI Act obligations

Even if 837 is enacted and automated contract decisions become easier to justify under the updated GDPR Art 22, the EU AI Act's obligations under Art 14, Art 26, and Art 86 remain fully in effect. GDPR governs whether you can process data to automate the decision. The AI Act governs how the system must be designed, how humans must oversee it, and what explanation individuals can demand. Both apply simultaneously.

When both regimes apply to the same system

If your system processes personal data to produce an AI-driven output that a deployer uses to make a consequential decision about a natural person, assume both GDPR and the AI Act apply. Here is how to structure your compliance programme:

1. Classify under GDPR Art 22 first

Determine whether the decision is based solely on automated processing and has legal or similarly significant effects. If yes, identify your lawful basis (contract, legal authorisation, or consent) and implement the required safeguards (human intervention, objection, contest).

2. Classify the AI system under AI Act Annex III

Check whether the AI system falls within any of the 8 Annex III domains. If yes, the full high-risk compliance regime applies — including risk management (Art 9), data governance (Art 10), technical documentation (Art 11), human oversight design (Art 14), and conformity assessment (Art 43).

3. Implement Art 26 deployer obligations

Assign human oversight to a competent person (Art 26(2)). Maintain logs for at least 6 months (Art 26(6)). Monitor performance and report serious incidents (Art 26(5)). If your organisation is a public authority, complete the FRIA first (Art 27).

4. Enable Art 86 explanations

Prepare the information needed to answer Art 86 requests — the role of the AI system in the decision-making process, and the main elements of the decision. This information must be clear and meaningful to a non-technical person. The deployer is obligated to provide it; the provider should supply it in the technical documentation.

5. Conduct a DPIA where required

Article 26(9) explicitly states that deployers should use the Art 13 instructions from the provider to comply with GDPR Article 35 (DPIA). For automated scoring and profiling of individuals at scale, a DPIA under GDPR Art 35 will almost certainly be required.
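
The five steps above can be summarised as a rough triage helper. This is a simplified sketch: the class and flag names are hypothetical, and real classification requires legal analysis of Annex III and the Art 86 adverse-effect test:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    processes_personal_data: bool
    solely_automated: bool      # no meaningful human review of the decision
    significant_effect: bool    # legal or similarly significant effect
    annex_iii_high_risk: bool   # listed in AI Act Annex III (and not exempt)

def applicable_regimes(p: SystemProfile) -> set:
    regimes = set()
    if p.processes_personal_data and p.solely_automated and p.significant_effect:
        regimes.add("GDPR Art 22")          # step 1
    if p.annex_iii_high_risk:
        regimes.add("AI Act Arts 14/26")    # steps 2-3
        if p.significant_effect:
            regimes.add("AI Act Art 86")    # step 4 (adverse effect simplified)
    return regimes
```

Note how dropping 'solely automated' removes GDPR Art 22 but leaves every AI Act obligation in place, which is the central point of this guide.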

Check whether both regimes apply to your automated system

You likely face overlap if your system does any of the following:

  • Scores, ranks or classifies natural persons based on personal data
  • Produces recommendations that a human or automated process acts on without further review
  • Processes special category data (health, biometrics, ethnicity) to produce outputs
  • Makes or informs decisions about credit, insurance, employment or benefits
  • Is used by a deployer in an EU context regardless of where the provider is based

Frequently asked questions

What is the difference between GDPR Article 22 and EU AI Act Article 86?
GDPR Article 22 applies when a decision is based solely on automated processing, produces legal effects or similarly significantly affects the data subject, and is triggered regardless of whether AI is involved. The right is against the automated process. EU AI Act Article 86 applies specifically when a deployer makes a decision based on the output of a high-risk AI system listed in Annex III (except §2), and the decision adversely affects the individual's health, safety, or fundamental rights. Article 86 grants the right to a 'clear and meaningful explanation' from the deployer — not a right to object to the processing itself. Both can apply to the same decision, but they give different rights and impose different obligations.
What does COM(2025) 837 change about automated decision-making under GDPR?
837 (if enacted) rewrites GDPR Article 22(1) and (2). The key change is that a decision based solely on automated processing is lawful where it is 'necessary for entering into or performance of a contract — regardless of whether the decision could be taken otherwise than by solely automated means.' This removes the defence that humans could theoretically have made the same decision — a defence that had been used to challenge automated contract decisions. The new text also adds: where several equally effective automated processing solutions exist, the controller must use the less intrusive one. This is a direct proportionality test applied to automated decisions for contracts.
If GDPR Article 22 is updated by 837, do EU AI Act human oversight obligations still apply?
Yes — the two sets of obligations are independent. COM 837's amendment to GDPR Article 22 addresses the lawful basis for automated processing. It does not touch the EU AI Act's Article 14 (human oversight design requirement), Article 26 (deployer obligation to assign human oversight), or Article 86 (right to explanation). Even if 837 is enacted and an automated decision for a contract is fully lawful under the updated GDPR Art 22, the same system as a high-risk AI system under Annex III must still have human oversight measures built in by the provider (Art 14) and implemented by the deployer (Art 26), and affected individuals still have the Art 86 right to explanation.
Does GDPR Article 22 apply to high-risk AI systems that involve some human review?
It depends on how much the human actually influences the outcome. GDPR Article 22 applies to decisions 'based solely on automated processing.' If there is meaningful human review — where the human exercises genuine judgment and can override the AI output — the decision may not be 'solely automated' and Article 22 may not apply. However, EU supervisory authorities (including the EDPB) have taken a broad view: nominal human review where a human rubber-stamps AI output without real discretion is treated as automated for GDPR purposes. The EU AI Act's Article 14 human oversight requirement is separate — it requires that humans be able to effectively monitor, interpret, override and stop the AI system. Meeting Art 14 does not automatically mean the decision is not solely automated for GDPR purposes.
Which sectors are most affected by the GDPR Art 22 + AI Act overlap?
The overlap is most significant in credit scoring, insurance underwriting, employment decisions, and loan eligibility assessments — all of which involve Annex III high-risk AI systems (§4 employment, §5 essential services) that also process personal data and make decisions with legal or significant effects. Healthcare AI decisions (§5), social benefits eligibility (§5), and judicial AI (§8) are also within scope of both regimes. Any sector where a high-risk AI system is used by a deployer to take a decision that significantly affects individuals — and where personal data is processed to produce that output — should assume both GDPR Art 22 and EU AI Act Art 86 apply.

Related compliance guides

Right to Explanation of AI Decisions (Art 86)

Who has the right, what triggers it, and what deployers must provide.

Human Oversight (Article 14)

Design requirements, override controls, and who must be assigned oversight.

AI Deployer Obligations (Article 26)

The full checklist for deployers using high-risk AI systems.

COM 837 — GDPR & Data Law Changes

Full guide to all 13 proposed GDPR amendments under COM(2025) 837.

EU AI Act vs GDPR

Side-by-side comparison — scope, obligations, data rights, and sanctions.

Recruitment & HR AI

Automated hiring decisions, Annex III §4 obligations, and GDPR Art 22 interaction.
