Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Articles 27, 77, 85–87 · COM(2025) 836 — Art 77 strengthened

Fundamental Rights & AI: FRIA, Authority Powers, and Individual Rights

The EU AI Act builds a layered fundamental rights framework around high-risk AI systems. Article 27 requires certain deployers to assess fundamental rights impacts before deployment. Article 77 gives fundamental rights authorities powers to inspect and test those systems. Article 85 lets anyone complain to a market surveillance authority. Article 86 gives individuals the right to a meaningful explanation of adverse AI decisions. COM(2025) 836 strengthens Article 77 significantly by adding a mandatory cooperation obligation between rights authorities and market surveillance authorities.

Key facts before reading this guide

  • The FRIA (Art 27) applies only to specific deployers — public bodies, public-service providers, and credit/insurance deployers — not all high-risk AI deployers
  • A separate, more detailed guide covers the FRIA procedure: Fundamental Rights Impact Assessment (FRIA)
  • The Art 86 right to explanation applies to Annex III high-risk AI decisions — it does not override or duplicate the GDPR Art 22 right to human review of solely automated decisions; Art 86(3) says it only applies where not already provided for under Union law

Is your AI system subject to the FRIA and Art 86 rights?

Regumatrix classifies your AI system against Annex III, determines whether you are a FRIA-obligated deployer, and maps which fundamental rights obligations apply to your specific deployment context.

Check my AI obligations — 3 free analyses

Fundamental Rights Impact Assessment — Article 27

Before deploying a high-risk AI system listed in Article 6(2) (Annex III systems), certain deployers must assess the impact the system may produce on fundamental rights. The assessment is not a one-time tick-box exercise — it must be updated if circumstances change.

Who must carry out a FRIA (Art 27(1))

  • Bodies governed by public law — government departments, agencies, local authorities, universities, publicly funded bodies — deploying any Annex III system (except §2 critical infrastructure)
  • Private entities providing public services — contracted service providers delivering government functions, social services, utilities — deploying any Annex III system (except §2)
  • Credit/insurance deployers — any deployer of Annex III §5(b) creditworthiness AI or §5(c) life/health insurance risk assessment AI, regardless of whether they are a public body
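The deployer test above is mechanical enough to sketch in code. The following Python is illustrative only — the enum labels, function name, and parameters are this guide's shorthand, not the Act's terminology, and the sketch is no substitute for a legal assessment:

```python
from enum import Enum, auto

class AnnexIII(Enum):
    """Illustrative subset of Annex III areas (labels are hypothetical)."""
    CRITICAL_INFRASTRUCTURE = auto()  # §2 — carved out of the FRIA duty
    EDUCATION = auto()                # §3
    EMPLOYMENT = auto()               # §4
    CREDITWORTHINESS = auto()         # §5(b)
    LIFE_HEALTH_INSURANCE = auto()    # §5(c)

def fria_required(area: AnnexIII,
                  is_public_body: bool,
                  provides_public_service: bool) -> bool:
    """Sketch of the Art 27(1) deployer test as described in this guide."""
    if area is AnnexIII.CRITICAL_INFRASTRUCTURE:
        return False  # §2 systems are excluded from the FRIA requirement
    # Credit / insurance deployers are caught regardless of public status
    if area in (AnnexIII.CREDITWORTHINESS, AnnexIII.LIFE_HEALTH_INSURANCE):
        return True
    # Otherwise only public bodies and public-service providers are caught
    return is_public_body or provides_public_service
```

On this sketch, a private bank deploying creditworthiness AI still needs a FRIA (`fria_required(AnnexIII.CREDITWORTHINESS, False, False)` is true), while a private employer deploying HR AI does not — but a public authority deploying the same HR system does.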

What the FRIA must cover — Art 27(1)(a)–(f)

  • (a) Description of the deployer's processes in which the AI system will be used
  • (b) Period of time and frequency with which the system will be used
  • (c) Categories of natural persons and groups likely to be affected
  • (d) Specific risks of harm to identified persons or groups, using the provider's Article 13 transparency information
  • (e) Description of human oversight measures and how they will be implemented
  • (f) Measures to be taken if identified risks materialise — including internal governance and complaint mechanisms
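Teams that track FRIA completeness internally could represent the six Art 27(1) elements as a simple record. The field names below are this guide's invention, not official terminology — a minimal sketch of a completeness check:

```python
from dataclasses import dataclass, fields

@dataclass
class FriaRecord:
    """One field per Art 27(1)(a)-(f) element (illustrative naming)."""
    process_description: str   # (a) deployer processes using the AI system
    period_and_frequency: str  # (b) how long and how often it is used
    affected_groups: str       # (c) persons and groups likely affected
    specific_harm_risks: str   # (d) risks, drawing on Art 13 information
    human_oversight: str       # (e) oversight measures and implementation
    mitigation_measures: str   # (f) governance and complaint mechanisms

    def incomplete_elements(self) -> list[str]:
        """Names of the Art 27(1) elements still left blank."""
        return [f.name for f in fields(self)
                if not getattr(self, f.name).strip()]
```

A record with any element returned by `incomplete_elements()` is not ready to notify to the market surveillance authority — and, per Art 27(2), the same check applies again whenever any element changes during use.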

Timing, reuse, and notification — Art 27(2)–(3)

The FRIA obligation applies to the first use of a high-risk AI system. For similar subsequent uses, the deployer may rely on previously conducted FRIAs or existing assessments carried out by the provider. Once completed, the deployer must notify the market surveillance authority of the results by submitting the AI Office questionnaire template. If any FRIA element changes or becomes outdated during use, the deployer must update the assessment and re-notify.

Relationship to GDPR DPIA — Art 27(4)

If the FRIA obligations are already met through a GDPR Article 35 Data Protection Impact Assessment or a Law Enforcement Directive Article 27 assessment, the FRIA must complement — not duplicate — that assessment. In practice, a combined DPIA/FRIA is both permitted and encouraged. The AI Office's questionnaire template (Art 27(5)) is designed to facilitate this.

Powers of fundamental rights authorities — Article 77

National public authorities responsible for enforcing fundamental rights obligations — including equality bodies, data protection authorities, and human rights commissions — have specific access and investigation powers with respect to high-risk AI systems.

Access to documentation (Art 77(1))

National public authorities that supervise or enforce obligations under Union law protecting fundamental rights — including the right to non-discrimination — in relation to the use of Annex III high-risk AI systems have the power to request and access any documentation created or maintained under the AI Act. Documentation must be provided in accessible language and format. The requesting authority must inform the market surveillance authority of the request.

Technical testing (Art 77(3))

Where available documentation is insufficient to determine whether an infringement of fundamental rights obligations has occurred, the rights authority may make a reasoned request to the market surveillance authority to organise technical testing of the AI system. The MSA must organise such testing with close involvement of the requesting authority within a reasonable time. This is a significant power: rights bodies can effectively force technical audits of deployed AI systems.

Who qualifies as a fundamental rights authority (Art 77(2))

Each Member State must identify and publish a list of the public authorities or bodies with Art 77 powers, notify the Commission and the other Member States, and keep the list updated; the initial list was due by 2 November 2024. Equality commissions, national human rights institutions, data protection authorities, and anti-discrimination bodies are all candidates depending on Member State designation.

Right to lodge a complaint — Article 85

Any natural or legal person — including individuals, civil society organisations, and companies — who has grounds to consider that a person or organisation has infringed any provision of the AI Act may submit a complaint to the relevant market surveillance authority. There is no standing requirement beyond "grounds to consider" an infringement exists — meaning NGOs, competitors, and affected individuals can all file. Complaints feed into market surveillance activities and are handled via the procedures established by each Member State's market surveillance authority. The right is without prejudice to other administrative or judicial remedies — complainants can also pursue court proceedings or DPA complaints separately.

Right to explanation of AI decisions — Article 86

Article 86 creates a direct individual right against deployers. It applies to adverse decisions made on the basis of high-risk AI system outputs — not to all AI interactions.

When the right applies — Art 86(1)

  • The person is subject to a decision taken by the deployer on the basis of a high-risk AI system output — not just an AI recommendation that humans ignored
  • The system is in Annex III — except §2 critical infrastructure systems
  • The decision produces legal effects or similarly significantly affects the person
  • The person considers the decision to have an adverse impact on their health, safety or fundamental rights
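These four conditions are cumulative, which lends itself to a simple predicate. The sketch below is illustrative — all parameter names are this guide's shorthand, and whether each condition is actually satisfied in a given case is a legal question the code cannot answer:

```python
def art86_right_applies(annex_iii_listed: bool,
                        critical_infrastructure: bool,
                        decision_based_on_ai_output: bool,
                        legal_or_significant_effect: bool,
                        claimed_adverse_impact: bool) -> bool:
    """Sketch of the cumulative Art 86(1) conditions described above."""
    return (annex_iii_listed
            and not critical_infrastructure      # §2 carve-out
            and decision_based_on_ai_output      # deployer acted on the output
            and legal_or_significant_effect      # legal or similar effect
            and claimed_adverse_impact)          # health, safety or rights
```

If any one input is false — for example, the system is a §2 critical infrastructure system, or a human reached the decision independently of the AI output — the explanation right does not arise.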

What the explanation must cover

Deployers must provide: (1) the role of the AI system in the decision-making procedure — what function did the AI perform, what was its weight in the outcome; and (2) the main elements of the decision taken — what factors led to the conclusion. The explanation must be "clear and meaningful" — legal boilerplate does not satisfy this standard.

Practical examples

  • Job application rejected — HR AI system scored candidate below threshold (Annex III §4)
  • Loan application denied — creditworthiness AI flagged high risk (Annex III §5(b))
  • Benefits claim refused — public authority welfare AI determined ineligibility (Annex III §5(a))
  • University admission refused — admissions AI ranked candidate too low (Annex III §3)

Whistleblower protection for AI Act infringements — Article 87

Article 87 applies the EU Whistleblower Protection Directive (Directive (EU) 2019/1937) to reports of AI Act infringements. Any person who reports a genuine AI Act violation is protected under that Directive's framework: protection from retaliation, confidential reporting channels, and legal support obligations. This applies whether the report is made internally (to the organisation's compliance function), to a regulatory authority, or — in limited circumstances — publicly. For AI teams: ensure your internal speak-up processes are set up to receive AI Act complaints, not just traditional whistleblowing topics.

PROPOSAL — not yet enacted law

COM(2025) 836 — Digital Omnibus: Article 77 amendments

The Digital Omnibus proposal clarifies and strengthens the Article 77 mechanism in three ways, responding to practical barriers that emerged in the first year of application.

Updated heading

Article 77's heading changes from "Powers of authorities protecting fundamental rights" to "Powers of authorities protecting fundamental rights and cooperation with market surveillance authorities" — signalling that cooperation is now an equal element of this provision.

New Art 77(1a) — Market surveillance authority must grant access

Under the existing text, fundamental rights authorities can request access to documentation. Under COM(2025) 836, the market surveillance authority is obligated to grant access and can request required information or documentation from providers or deployers where necessary. This converts a permissive power into a duty, closing gaps where MSAs previously declined to share.

New Art 77(1b) — Mutual cooperation obligation

A new mutual cooperation duty is inserted: market surveillance authorities and fundamental rights bodies shall cooperate closely and provide each other with mutual assistance necessary for fulfilling their respective mandates. This specifically includes exchange of information necessary for effective supervision or enforcement of both the AI Act and the relevant Union legislation protecting fundamental rights. In practice this closes the gap between AI Act enforcement and anti-discrimination enforcement.

No changes are proposed under COM(2025) 837 for Articles 27, 77, 85, 86, or 87.

Frequently Asked Questions

Who must carry out a Fundamental Rights Impact Assessment (FRIA) under Article 27?

The FRIA obligation under Article 27 applies to deployers of high-risk AI systems listed in Article 6(2) — Annex III systems — with one exception: systems used in the area of critical infrastructure (Annex III §2) are excluded from the FRIA requirement. The obligation applies to two categories of deployer: (1) bodies governed by public law, or private entities providing public services; and (2) deployers of high-risk AI systems in the areas of creditworthiness assessment (Annex III §5(b)) and life/health insurance risk assessment (Annex III §5(c)). Pure private-sector deployers of most Annex III systems — such as HR AI, education AI, or border control AI — are not required to carry out a FRIA, but public bodies and public-service providers deploying the same systems are.

Can a FRIA be combined with a GDPR Data Protection Impact Assessment?

Yes, and this is explicitly encouraged. Article 27(4) states that if a DPIA conducted under GDPR Article 35 or Law Enforcement Directive Article 27 already covers the same obligations as the FRIA, the FRIA must complement — not duplicate — that assessment. In practice, organisations already conducting DPIAs for their high-risk AI systems can integrate the FRIA elements into their DPIA process, covering the additional six elements required by Article 27(1): the description of processes, deployment frequency, affected groups, harm risks, human oversight measures, and risk mitigation arrangements. The AI Office is required to develop a questionnaire template to facilitate this.

What can fundamental rights authorities demand from AI providers under Article 77?

Under current Article 77, national public authorities responsible for enforcing fundamental rights obligations (such as national equality bodies, data protection authorities, and human rights institutions) can request access to any documentation created or maintained under the AI Act for high-risk AI systems listed in Annex III. Under COM(2025) 836, this access route is strengthened: requests go to the market surveillance authority, which is obliged to grant access and can compel providers or deployers to provide required information. A new mutual cooperation obligation is also introduced, requiring market surveillance authorities and fundamental rights bodies to exchange information and assist each other in enforcement.

Who has the right to explanation of an AI decision under Article 86?

Any affected person subject to a deployer's decision that: (1) is based on the output of a high-risk AI system listed in Annex III (except critical infrastructure systems under §2); and (2) produces legal effects or similarly significantly affects that person; and (3) that person considers to have an adverse impact on their health, safety or fundamental rights — can request a clear and meaningful explanation covering: the role of the AI system in the decision-making procedure, and the main elements of the decision taken. This is a deployer obligation: the affected person goes to the deployer, not the AI provider. The right does not apply where it would be excluded by Union or national law, or where it is already provided for under another Union law provision.

What is the difference between lodging a complaint under Article 85 and requesting an explanation under Article 86?

Article 85 complaints go to market surveillance authorities — they are regulatory complaints alleging that an AI Act obligation has been violated. They trigger the market surveillance authority's investigation powers and can result in enforcement action against providers or deployers. Article 86 explanations are direct individual rights against the deployer — the affected person asks the deployer to explain the role of AI in a specific adverse decision. Article 86 is a substantive right to information, not an enforcement mechanism. Both rights can be exercised simultaneously: a person who receives an unexplained AI decision can both request an explanation from the deployer (Art 86) and file a regulatory complaint if they believe an AI Act obligation was breached (Art 85).

Related Compliance Guides

FRIA Procedure — Step-by-Step Guide

How to carry out and document the six-element fundamental rights impact assessment under Article 27.

AI Deployer Obligations (Article 26)

Full deployer obligation chain — human oversight, logging, worker notification, FRIA, and Art 86.

Right to Explanation (Article 86) — Deep Dive

What 'clear and meaningful explanation' means in practice and how deployers should respond to requests.

Is My AI High-Risk? — Annex III Classification

FRIA, Art 77, and Art 86 all apply to Annex III systems — confirm your classification first.

EU AI Act Market Surveillance & Enforcement

How market surveillance authorities use Art 85 complaints, their investigation powers, and how to file.

Public Sector AI — FRIA & Deployer Obligations

Public authority deployers face the broadest FRIA obligations under Article 27.

Map your fundamental rights obligations in 30 seconds

Regumatrix identifies whether you are a FRIA-obligated deployer, confirms which Art 86 explanation duties apply to your deployment context, and generates a compliance checklist covering all five fundamental rights provisions.

Start free analysis