Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Annex III §8 — Administration of Justice · €15M / 3% high-risk penalty · Art 86 right to explanation applies

EU AI Act for LegalTech & Judiciary AI

AI systems that assist judges and arbitrators — by researching facts, interpreting law, or applying legal rules to specific case facts — are high-risk under Annex III §8 of the EU AI Act. Full Chapter III obligations apply: risk management, data governance, human oversight, and conformity assessment. Individuals affected by AI-assisted decisions have a right to explanation under Article 86.

High-risk classification: full Chapter III obligations from 2 August 2026

AI systems within Annex III §8 scope carry the Art 99(4) penalty: up to €15 million or 3% of total worldwide annual turnover, whichever is higher, for violating high-risk obligations. The provider must complete conformity assessment before placing the system on the market. The deploying judicial authority or ADR body has separate deployer obligations.
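The Art 99(4) ceiling works as a "whichever is higher" rule for undertakings. A minimal sketch (the turnover figures below are purely illustrative):

```python
def art_99_4_max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of an Art 99(4) fine for an undertaking:
    EUR 15 million or 3% of total worldwide annual turnover,
    whichever is higher."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# Illustrative: a provider with EUR 2bn turnover faces up to EUR 60m exposure;
# a smaller provider is still exposed to the EUR 15m floor.
print(art_99_4_max_fine(2_000_000_000))  # → 60000000.0
print(art_99_4_max_fine(100_000_000))    # → 15000000.0
```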

Is your legal AI tool within Annex III §8 scope?

Regumatrix checks your system description against all 8 Annex III domains and Art 5 prohibitions. You get your risk tier, classification, exact obligations, and fine exposure — in about 30 seconds.

Check in 30 seconds — 3 free analyses

What is in scope: Annex III §8 — Administration of Justice

Annex III §8 has two sub-categories. Only §8(a) applies to LegalTech. §8(b) covers election interference AI — a separate topic.

Annex III §8(a) — HIGH-RISK

AI systems intended to be used by a judicial authority, or on their behalf, to assist in researching and interpreting facts and the law and in applying the law to a concrete set of facts.

This also covers AI used in alternative dispute resolution (ADR) in a similar way — for example AI assisting arbitrators or mediators in the same analytical functions.

In scope — typical examples

  • AI assisting judges to identify relevant case law and statutes
  • AI tools used by arbitrators to analyse factual evidence
  • AI that applies legal rules to facts and suggests a legal conclusion
  • AI used in court-managed ADR proceedings
  • AI provided to judicial assistants acting under judicial authority

Outside §8(a) scope

  • AI research tools used by private lawyers and law firms
  • Contract drafting and review AI for corporate legal teams
  • AI for law firm billing, matter management, or CRM
  • Legal chatbots answering general public queries
  • E-filing software and court case management systems (administrative only)

Key scoping test: The system must be intended for use by or on behalf of a judicial authority in the performance of a judicial or quasi-judicial function. The intended purpose at the time of design and placement on the market determines classification — not the actual use in a specific case. A system sold as a general legal research tool that courts happen to use is not automatically within §8(a) scope, but a system marketed specifically for court use is.
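The scoping test above can be sketched as a simple decision function. The field names are hypothetical, and a real classification requires legal analysis of the intended purpose, not a boolean check:

```python
from dataclasses import dataclass

@dataclass
class LegalAISystem:
    # Hypothetical fields describing the provider's *intended purpose*
    # at design time and placement on the market:
    intended_for_judicial_authority: bool  # by, or on behalf of, a court/ADR body
    assists_judicial_function: bool        # researching/interpreting facts and law,
                                           # applying the law to concrete facts

def in_annex_iii_8a_scope(system: LegalAISystem) -> bool:
    """Rough §8(a) scoping check: both limbs of the intended-purpose
    test must hold. A sketch, not legal advice."""
    return (system.intended_for_judicial_authority
            and system.assists_judicial_function)

# A general research tool that courts merely happen to use:
print(in_annex_iii_8a_scope(LegalAISystem(False, True)))  # → False
# A system marketed specifically for judicial case analysis:
print(in_annex_iii_8a_scope(LegalAISystem(True, True)))   # → True
```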

Full high-risk obligation chain (provider perspective)

A provider of an Annex III §8(a) AI system must comply with the full Chapter III Section 2 obligation chain before placing the system on the market.

Art 9

Risk Management System

Document all foreseeable risks — including risks of inconsistent or incorrect legal analysis, systematic bias in case law or statutory interpretation, and misuse by courts under time pressure. The risk management system must be maintained and updated throughout the system's lifecycle.

Art 10

Data Governance

Training and testing datasets (case law databases, statutory texts, judicial decisions) must be relevant, representative, and free from bias. Special scrutiny applies to historical judicial data that may encode past discrimination.

Art 11

Technical Documentation (Annex IV)

Detailed documentation covering the system's design, training data, testing methodology, intended purpose, performance metrics, and limitations. Must be available to market surveillance authorities.

Art 12

Record-Keeping & Logging

Automatic event logs must be generated. These are especially important for §8(a) systems — a judge or arbitrator must be able to demonstrate what the AI analysed and recommended in any given proceeding.
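A structured event log in the spirit of Art 12 might record, per proceeding, what the system analysed and what it recommended. The JSON schema below is an illustrative assumption, not a format the Act prescribes:

```python
import json
from datetime import datetime, timezone

def log_ai_event(case_ref: str, inputs_analysed: list[str],
                 output_summary: str) -> str:
    """Build one timestamped, append-only log entry recording what the
    AI system analysed and recommended in a given proceeding.
    Hypothetical schema — Art 12 mandates logging, not this format."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_ref": case_ref,
        "inputs_analysed": inputs_analysed,
        "output_summary": output_summary,
    }
    return json.dumps(entry)

# Hypothetical proceeding reference and inputs:
entry = log_ai_event("ARB-2026-014",
                     ["claimant brief", "statutory excerpt"],
                     "suggested list of candidate precedents")
```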

Art 13

Transparency & Instructions for Use

Clear instructions describing the system's capabilities, limitations, performance on specific domains of law, and known failure modes. Courts need to understand what the AI can and cannot reliably do.

Art 14

Human Oversight

The system must be designed so that the judicial authority remains in control. The judge or arbitrator must be able to understand, verify, and override the AI's analysis. The system cannot independently determine a legal outcome.

Art 15

Accuracy, Robustness & Cybersecurity

Must achieve declared accuracy levels for its stated legal domain. Must be resilient against manipulation. Particularly critical where the AI's analysis could influence decisions affecting fundamental rights.

Conformity assessment: self-assessment is available

Unlike remote biometric identification (Annex III §1), judicial AI under §8(a) does not require a notified body assessment. Under Art 43(2), the provider follows the conformity assessment procedure based on internal control (Annex VI) — provided the system is not also a product covered by Union harmonisation legislation requiring third-party assessment.

Self-assessment route (Annex VI):

  1. Verify compliance with harmonised standards or common specifications
  2. Prepare technical documentation (Annex IV)
  3. Complete internal control procedures
  4. Draw up EU Declaration of Conformity (Art 47)
  5. Affix CE marking (Art 48)
  6. Register the system in the EU AI database (Art 49)

Read the full conformity assessment guide →

Article 86 — Right to explanation of AI-assisted decisions

What Article 86 says

Art 86(1) gives any affected person subject to a decision—taken by the deployer on the basis of output from a high-risk Annex III AI system—the right to obtain clear and meaningful explanations of the AI's role in the decision-making procedure and the main elements of the decision. This right applies where the decision produces legal effects or similarly significantly affects the person adversely.

Who can invoke Art 86

Any natural person adversely affected by a decision that was taken using output from the §8(a) AI system. In a legal context: a party to proceedings where AI-assisted analysis contributed to a judgment or award.

Who must respond

The deployer — the judicial authority or ADR body that used the high-risk AI system. Not the LegalTech provider. The deployer must be able to explain what role the AI played and the main elements of the decision.

What "clear and meaningful" means

The explanation must cover the AI system's role in the procedure — was it used to find case law, interpret a statute, or suggest an outcome? — and the main elements of the actual decision taken. Not a technical description of the model, but a plain account of what the AI contributed.
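What a deployer's Art 86 response covers can be sketched as a plain-language record of the two required elements. All field contents below are illustrative assumptions:

```python
def art_86_explanation(ai_role: str, main_elements: list[str]) -> str:
    """Assemble a plain-language account of the AI system's role in the
    procedure and the main elements of the decision — not a technical
    description of the model. Hypothetical format."""
    lines = [f"Role of the AI system: {ai_role}",
             "Main elements of the decision:"]
    lines += [f"  - {element}" for element in main_elements]
    return "\n".join(lines)

# Hypothetical arbitration example:
print(art_86_explanation(
    "identified candidate precedents; the arbitrator made the final assessment",
    ["claim dismissed on limitation grounds", "costs split equally"],
))
```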

Exceptions and limitations

Art 86(2) allows exceptions and restrictions laid down in Union or national law. Under Art 86(3), the right applies only to the extent it is not already provided for under other Union law. On that basis, national law may carve out the judicial branch where national procedural law already provides equivalent protections.

Obligations for courts and ADR bodies as deployers

A court or ADR body that procures and uses an Annex III §8(a) AI system is a deployer under Art 26. Key deployer obligations:

Use in accordance with instructions for use

The AI must be used as the provider specifies. Courts should incorporate the provider's instructions into their AI governance policy.

Art 26(1)

Human oversight assignment

Assign competent, trained persons for oversight. In a judicial context, this means the judge or judicial officer must remain in control and be able to verify and override the AI's analysis.

Art 26(2)

Monitor operation and suspend if risk is identified

If the court considers the AI system may present a risk, it must suspend use and inform the provider and the relevant market surveillance authority without undue delay.

Art 26(5)

Retain automatic logs for at least 6 months

Automatically generated logs of AI system use must be kept for at least 6 months. These may be needed for procedural appeals or rights-based challenges.

Art 26(6)
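A deployer could enforce the six-month floor with a simple retention check. The helper below is an illustrative sketch — Art 26(6) sets only a minimum, and national law or pending proceedings may require keeping logs longer:

```python
from datetime import date, timedelta

# Assumption: ~6 months approximated as 183 days (Art 26(6) minimum).
MIN_RETENTION = timedelta(days=183)

def may_delete(log_created: date, today: date) -> bool:
    """True only once a log has been retained for at least six months.
    Longer retention may be required by national law or open appeals."""
    return today - log_created >= MIN_RETENTION

print(may_delete(date(2026, 8, 2), date(2026, 12, 1)))  # → False
```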

Notify parties that AI is being used

Art 26(11) requires deployers of Annex III AI that make or assist in decisions about natural persons to inform those persons that they are subject to the use of a high-risk AI system.

Art 26(11)

No changes are proposed under COM(2025) 836 or COM(2025) 837 specifically for Annex III §8 judicial AI obligations. The general high-risk framework, including Art 14 human oversight and Art 86 right to explanation, remains unchanged in both proposals.

Key grey areas in LegalTech AI classification

Watch out — these scenarios commonly trigger unexpected Annex III §8 classification:

  • AI sold as "legal research" but marketed specifically for court use
  • AI tools used by court-appointed experts analysing cases on behalf of a court
  • ADR platforms using AI to suggest settlement amounts or outcomes
  • AI-assisted e-discovery tools deployed by courts (not private parties)

Verify your classification — free

Frequently asked questions

What legal AI systems are high-risk under the EU AI Act?
Annex III §8(a) covers AI systems intended to be used by a judicial authority, or on their behalf, to assist in researching and interpreting facts and the law and in applying the law to a specific set of facts. This also covers AI used in alternative dispute resolution (ADR) in a similar way. AI that directly influences how a judge or arbitrator analyses a case falls within this category. Note: Annex III §8(b) separately covers AI intended to influence election outcomes or voting behaviour — that is a different high-risk category.
Does a legal research AI tool used by lawyers (not judges) count as high-risk?
Not under Annex III §8(a), which is specifically limited to use 'by a judicial authority or on their behalf.' An AI research tool used by a private law firm to help lawyers draft arguments is not within §8(a) scope. However, if the tool is used by a court-appointed expert acting 'on behalf of' a judicial authority, it may come within scope. The key question is whether the system assists a person exercising judicial or quasi-judicial functions.
Do affected parties in legal proceedings have a right to explanation?
Yes. Article 86 gives any person subject to a decision taken by a deployer on the basis of output from a high-risk AI system (listed in Annex III) the right to obtain clear and meaningful explanations of the AI's role in the decision-making procedure and the main elements of the decision — if that decision produces legal effects or significantly affects them adversely. For Annex III §8(a) judicial AI, this means a party in proceedings could request an explanation of how the AI assisted the judge or arbitrator. Article 86(2) allows national law to provide exceptions or restrictions.
Do courts need to conduct a Fundamental Rights Impact Assessment (FRIA)?
Yes, ordinarily. Article 27(1) requires a FRIA from deployers that are bodies governed by public law before deploying an Article 6(2) high-risk AI system. The only carve-out is for 'high-risk AI systems intended to be used in the area listed in point 2 of Annex III' (critical infrastructure), which does not cover §8(a). Courts are public bodies, so deploying §8(a) AI would ordinarily trigger a FRIA — though Member States may have specific rules for the judicial branch. Article 27(2) allows reliance on previously conducted or provider-supplied impact assessments in similar cases.
Does a contract analysis AI tool used by in-house legal teams need to comply with the EU AI Act?
A contract analysis AI used by an in-house legal team or a law firm is not high-risk under Annex III §8(a) — that category is limited to judicial authorities. However, the tool may still have obligations under Article 50 (transparency): if users interact with it as an AI system, providers may need to make clear they are interacting with AI. If the tool also makes credit, employment, or other decisions in other parts of the organisation, those specific outputs may fall under other Annex III categories. The EU AI Act follows intended purpose — assess each use case independently.

Related compliance guides

Right to Explanation of AI Decisions (Art 86)Human Oversight (Art 14)Conformity Assessment (Art 43)AI Deployer Obligations (Art 26)Fundamental Rights Impact Assessment (Art 27)Is My AI High-Risk? (Full Checklist)

Check your legal AI system in 30 seconds

Regumatrix analyses your AI system description against all 8 Annex III domains. You get your risk tier, exact classification, every obligation under Arts 9–15, conformity assessment route, fine exposure under Art 99, and whether Art 86 right to explanation applies — in a cited 8-section report.

Start free — no credit card

3 free analyses included