Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Individual right · Art 86 — applies from August 2026 · COM(2025) 837 — GDPR Art 22 overlap

Right to Explanation of AI Decisions (Article 86): What Individuals Can Ask For

Article 86 creates a new right — separate from GDPR — for any affected person to receive a clear and meaningful explanation when a deployer uses a high-risk AI system to make a decision that adversely affects their health, safety, or fundamental rights.

A right that flows from what deployers do with AI

Article 86 of Regulation (EU) 2024/1689 is directed at deployers — the organisations and professionals who operate high-risk AI systems in the real world. Deployers are already required by Art 26 to ensure human oversight, monitor performance, and follow usage instructions. Article 86 adds a corresponding right for the people affected by deployers' AI-assisted decisions: the right to know how the AI shaped the outcome.

Does your deployment of high-risk AI trigger Article 86 obligations? Regumatrix maps your deployer obligations including Art 86 readiness →

The right — exact text of Article 86(1)

“Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III, with the exception of systems listed under point 2 thereof, and which produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.”

— Article 86(1), Regulation (EU) 2024/1689 (EU AI Act)

Four conditions — all must be met

The Article 86 right is triggered only when all four conditions are satisfied simultaneously.

1. Decision taken by the deployer

The right applies where the deployer makes a decision. The deployer is the entity using the AI system in a professional context. If the output is an input to a purely human decision that the AI does not materially shape, the right may not be triggered.

2. Based on the output of a high-risk Annex III AI system

The AI system must be listed in Annex III. The right does not apply to non-high-risk AI systems or to high-risk systems covered by Annex I (product-safety AI). Annex III point 2 (AI used as a safety component in the management and operation of critical infrastructure) is explicitly excluded.

3. Produces legal effects or similarly significant effects

The decision must produce legal effects (e.g., denial of a benefit, termination of employment, refusal of credit) or effects similarly significant in practice — such as a decision that substantially affects how a person is treated, their access to opportunities, or their assessment under a public authority.

4. Adverse impact on health, safety, or fundamental rights

The affected person must consider the impact to be adverse. The subjective element matters: it is enough that the person believes the decision negatively affects them. Health, safety, and fundamental rights are interpreted broadly, consistent with the EU Charter of Fundamental Rights.
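As an internal screening aid, the four cumulative conditions above can be sketched in code. This is a hedged illustration only — the class and field names are assumptions for the sketch, not statutory tests:

```python
from dataclasses import dataclass

# Illustrative screening aid for the four cumulative Art 86(1) conditions.
# Field names are assumptions, not legal terms of art.
@dataclass
class DecisionContext:
    decision_taken_by_deployer: bool       # condition 1
    annex_iii_high_risk: bool              # condition 2: system listed in Annex III
    annex_iii_point_2: bool                # Annex III point 2 carve-out (Art 86(1))
    legal_or_similarly_significant: bool   # condition 3
    adverse_impact_claimed: bool           # condition 4: person considers impact adverse

def article_86_triggered(ctx: DecisionContext) -> bool:
    """All four conditions must hold simultaneously; point 2 systems are excluded."""
    return (
        ctx.decision_taken_by_deployer
        and ctx.annex_iii_high_risk
        and not ctx.annex_iii_point_2
        and ctx.legal_or_similarly_significant
        and ctx.adverse_impact_claimed
    )
```

If any single flag is false — or the system falls under the point 2 carve-out — the right is not triggered, mirroring the "all must be met" logic above.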

What the explanation must cover

1. Role of the AI system in the decision-making procedure

The deployer must explain what the AI system contributed to the decision-making process — what it assessed, what it scored or classified, what it recommended, and how that AI output fed into the final decision. A description of the AI system in general terms is insufficient; the explanation must relate to the specific decision made about the affected person.

2. Main elements of the decision taken

The deployer must also explain the main elements of the decision itself — the factors that principally drove the outcome. This goes beyond just describing the AI system; it includes the substantive reasoning behind the result. Together with the AI system's role, this gives the affected person enough information to understand and, if necessary, challenge the decision.

Standard: “clear and meaningful”

The explanation must be both clear (accessible, not written purely in technical terms) and meaningful (connected to the actual decision, not generic boilerplate). A deployer cannot satisfy Article 86 by providing a standard-issue AI system description that is identical for every person. The explanation must be specific enough for the affected person to understand why their outcome was what it was.
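One way a deployer might structure per-person explanation records to avoid the boilerplate trap is sketched below. This is an illustrative format under our own assumptions — Article 86 prescribes no particular record structure, and the names here are invented:

```python
from dataclasses import dataclass

# Illustrative record covering the two elements Article 86(1) names.
# Field names are assumptions, not legal requirements.
@dataclass
class Article86Explanation:
    person_ref: str                    # the affected person this record concerns
    ai_system_role: str                # what the system assessed/scored and how it fed the decision
    main_decision_elements: list[str]  # factors that principally drove the outcome

def meets_clear_and_meaningful(expl: Article86Explanation,
                               boilerplate_texts: set[str]) -> bool:
    """Crude proxy check: the role text must not be a stock template reused for
    everyone, and at least one decision-specific factor must be listed."""
    return (expl.ai_system_role not in boilerplate_texts
            and len(expl.main_decision_elements) > 0)
```

A real compliance process would of course involve human review of the explanation text; the check above only catches the obvious failure mode of sending an identical description to every person.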

Art 26

The obligation falls on the deployer, not the AI provider

Article 86 imposes the explanation obligation on the deployer — the entity using the AI system in a specific context to make decisions about persons. The AI provider's obligations under Chapter III do not include a direct obligation to respond to affected persons' explanation requests. Deployers at risk of triggering Article 86 should therefore ensure, when procuring or deploying a high-risk Annex III AI system, that they can obtain from the provider sufficient information about the system's operation to construct a meaningful explanation. Article 26(1) already requires deployers to follow usage instructions and ensure human oversight; Article 86 builds on this by requiring deployers to be able to articulate the AI system's role to the person affected.

Exceptions — when the right does not apply

Art 86(2)

Union or national law exceptions

Article 86(2) allows exceptions or restrictions to the right where Union or national law — in compliance with Union law — provides for them. This primarily covers law enforcement and national security contexts, where disclosing how AI shaped a decision could compromise an investigation or endanger safety. Exceptions must be grounded in law and must comply with the general principle of proportionality.

Art 86(3)

Subsidiarity — where another Union law already provides the right

Article 86(3) establishes that Article 86 applies only to the extent that the right is not otherwise provided under Union law. If GDPR Article 22 or another regulation already gives an equivalent explanation right in the specific situation, Article 86 does not create a separate additional obligation. However, Article 86 and GDPR Article 22 have different scopes (see below), so both may apply simultaneously in many scenarios.

Relationship with GDPR Article 22

GDPR Article 22

  • Applies to solely automated decisions
  • Right not to be subject to purely automated processing
  • Requires a lawful basis (contract necessity, explicit consent, or authorisation by Union or Member State law)
  • Includes right to human intervention and to contest the decision
  • Applied by the data controller regardless of AI classification

EU AI Act Article 86

  • Applies where a deployer uses high-risk AI output in a decision
  • Right to explanation of the AI's role and the main decision elements
  • No explicit lawful-basis requirement — the right exists because the system is Annex III high-risk
  • Explanation only — does not include a right to contest or to human review (human oversight is covered by Art 26)
  • Applied by the deployer; applies even where a human was involved in the decision

Both rights may apply to the same situation. A deployer using a high-risk credit scoring AI to make a solely automated lending decision may be subject to both GDPR Art 22 (requiring a lawful basis for automated processing and right to human review) and EU AI Act Art 86 (requiring a clear explanation of the AI system's role). Neither right displaces the other.
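The overlap can be sketched as a small helper reporting which of the two rights a decision plausibly engages. This is a deliberate simplification — the parameters are illustrative assumptions, and it omits the "legal or similarly significant effects" threshold both regimes also require:

```python
# Simplified sketch of the GDPR Art 22 / AI Act Art 86 overlap.
# Parameter names are illustrative assumptions, not statutory tests, and the
# shared "legal or similarly significant effects" threshold is assumed met.
def applicable_rights(solely_automated: bool,
                      annex_iii_high_risk: bool,
                      annex_iii_point_2: bool = False) -> set[str]:
    rights = set()
    if solely_automated:
        rights.add("GDPR Art 22")        # triggered by purely automated processing
    if annex_iii_high_risk and not annex_iii_point_2:
        rights.add("AI Act Art 86")      # applies even with a human in the loop
    return rights
```

For the credit-scoring example above (solely automated, Annex III high-risk), both rights come back — neither displaces the other.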

PROPOSAL — COM(2025) 837 · Not yet enacted law

The Digital Omnibus (COM(2025) 837) would amend GDPR Art 22 — making Art 86 more relevant

COM(2025) 837 proposes to clarify GDPR Article 22 to state that automated decisions can be based on contractual necessity regardless of whether the decision could be taken otherwise than by solely automated means. If enacted, this would resolve a longstanding ambiguity: previously, it was argued that a solely automated decision could only be permitted under GDPR Art 22(2)(a) if a human alternative was genuinely impractical. The 837 amendment would remove that constraint.

The practical effect: deployers could rely more broadly on automated AI-based decisions under GDPR Art 22(2)(a) contract necessity. This increases the volume of decisions where Article 86 of the EU AI Act applies — making the explanation right more valuable and more frequently triggered. Deployers adopting more automated decision-making under the 837 clarification should correspondingly strengthen their Article 86 explanation infrastructure. See the full 837 overview →

Frequently asked questions

Who holds the Article 86 right to explanation?

Article 86(1) grants the right to any 'affected person' — a person who is subject to a decision taken by the deployer based on the output of a high-risk AI system listed in Annex III (except point 2). The right is held by the individual affected, not by a company or organisation. It applies when the decision produces legal effects or similarly significantly affects the person in a way they consider to have an adverse impact on their health, safety, or fundamental rights. There is no residency or citizenship requirement — any person covered by the decision qualifies.

What must a deployer explain under Article 86?

The deployer must provide 'clear and meaningful explanations' covering: (1) the role of the AI system in the decision-making procedure, and (2) the main elements of the decision taken. The explanation must be clear and meaningful — it cannot consist of opaque technical descriptions that the affected person cannot understand. The obligation falls on the deployer, not the AI provider or developer. The deployer's existing obligation under Article 26 to monitor performance, maintain human oversight, and follow usage instructions for high-risk AI systems provides the foundation for being able to produce these explanations.

Which AI systems are covered by Article 86?

Article 86 covers high-risk AI systems listed in Annex III — with one exception. Systems listed under point 2 of Annex III (AI used as safety components in the management and operation of critical infrastructure, such as road traffic or the supply of water, gas, heating, or electricity) are excluded. The covered Annex III categories include AI used in: biometrics, education, employment and worker management, access to essential services (credit, insurance, healthcare), law enforcement, migration and asylum management, and administration of justice. The AI system must be used by a deployer to take a decision — it is not enough that the AI system was used to generate information that a human later acted on independently.

Is the Article 86 right to explanation the same as the GDPR Article 22 right?

No — they are separate rights with different triggers and scope. GDPR Article 22 applies to solely automated decisions producing legal effects or similarly significantly affecting the person, and gives rights against automated processing itself. Article 86 of the EU AI Act applies where a deployer makes a decision based on the output of a high-risk AI system — regardless of whether the decision-making process was fully automated. The GDPR right focuses on the automated process; the AI Act right focuses on the AI system's role in a decision where a deployer was involved. Article 86(3) clarifies that the Article 86 right applies only where it is not otherwise already provided under Union law — meaning GDPR Art 22 and Art 86 complement rather than replace each other.

Are there exceptions to the Article 86 right to explanation?

Yes. Article 86(2) provides that the right does not apply where Union or national law creates exceptions or restrictions to it in compliance with Union law. This allows Member States and EU legislators to carve out limited exceptions — for example, in law enforcement or national security contexts. Additionally, Article 86(3) establishes a subsidiarity principle: Article 86 only applies to the extent that the right is not already provided elsewhere under Union law. If GDPR Article 22 or another Union instrument already provides an equivalent right for the specific situation, Article 86 does not add a further obligation.

Related guides

AI Deployer Obligations Guide

Art 26 — full deployer obligation stack, monitoring, human oversight, instructions for use

High-Risk AI Checklist

Which Annex III categories are covered — classification determines whether Art 86 applies

Human Oversight Requirements

Art 14 — the Art 26 obligation that pairs with Art 86's explanation right

AI Transparency Obligations (Art 50)

Art 50 — chatbot disclosure and other transparency obligations separate from Art 86

Market Surveillance & Enforcement

Arts 74–85 — how authorities enforce the AI Act and how to lodge complaints (Art 85)

COM(2025) 837 — Digital Omnibus II

837 GDPR Art 22 amendment and what it means for automated decision-making

Be ready to explain every AI-assisted decision you make

Regumatrix maps your deployer obligations — including which of your AI deployments trigger Article 86 explanation obligations — so you can build the documentation and explanation processes you need before August 2026.

Get started free