AI systems that assist judges and arbitrators — by researching facts, interpreting law, or applying legal rules to specific case facts — are high-risk under Annex III §8 of the EU AI Act. Full Chapter III obligations apply: risk management, data governance, human oversight, and conformity assessment. Individuals affected by AI-assisted decisions have a right to explanation under Article 86.
High-risk classification: full Chapter III obligations from 2 August 2026
AI systems in the Annex III §8 scope carry the Art 99(4) penalty: up to €15 million or 3% of total worldwide annual turnover, whichever is higher, for violating high-risk obligations. The provider must complete conformity assessment before placing the system on the market. The deploying judicial authority or ADR body has separate deployer obligations.
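To make that exposure concrete, here is a minimal sketch of the Art 99(4) ceiling. The turnover figure is hypothetical, and the special rules for SMEs elsewhere in Art 99 are ignored:

```python
def art_99_4_max_fine(worldwide_annual_turnover_eur: float) -> float:
    # Art 99(4): up to EUR 15 million or, for an undertaking, up to 3%
    # of total worldwide annual turnover for the preceding financial
    # year, whichever is higher.
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# Hypothetical provider with EUR 800m turnover: 3% = EUR 24m,
# which exceeds the EUR 15m floor.
print(f"EUR {art_99_4_max_fine(800_000_000):,.0f}")  # EUR 24,000,000
```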
Is your legal AI tool within Annex III §8 scope?
Regumatrix checks your system description against all 8 Annex III domains and Art 5 prohibitions. You get your risk tier, classification, exact obligations, and fine exposure — in about 30 seconds.
Check in 30 seconds — 3 free analyses
Annex III §8 has two sub-categories. Only §8(a) applies to LegalTech; §8(b) covers AI intended to influence elections or referendums — a separate topic.
Annex III §8(a) — HIGH-RISK
AI systems intended to be used by a judicial authority, or on their behalf, to assist in researching and interpreting facts and the law and in applying the law to a concrete set of facts.
Annex III §8(a) also covers AI used in a similar way in alternative dispute resolution (ADR) — for example, AI assisting arbitrators or mediators in the same analytical functions. Recital 61 ties this to ADR proceedings whose outcomes produce legal effects for the parties.
In scope — typical examples
AI that researches or summarises case law for the bench, interprets statutes for a pending matter, suggests how the law applies to the facts of a concrete case, or drafts analysis for a judge or arbitrator to review.
Outside §8(a) scope
AI limited to purely ancillary administrative activities that do not affect the administration of justice in individual cases — recital 61 gives anonymisation or pseudonymisation of judicial decisions, communication between personnel, and administrative tasks as examples.
Key scoping test: The system must be intended for use by or on behalf of a judicial authority in the performance of a judicial or quasi-judicial function. The intended purpose at the time of design and placing on the market determines classification — not the actual use in a specific case. A system sold as a general legal research tool that courts happen to use is not automatically within §8(a) scope, but a system marketed specifically for court use is.
A provider of an Annex III §8(a) AI system must comply with the full Chapter III Section 2 obligation chain before placing the system on the market.
Risk Management System
Document all foreseeable risks — including risks of inconsistent or incorrect legal analysis, risks of systematic bias in case law or statutory interpretation, and misuse by courts under time pressure. The risk management system must be kept up to date throughout the system's lifecycle.
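The Act does not prescribe a format for this documentation. As an illustration only, a living risk register might look like the following sketch; all field names and entries are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Risk:
    # One entry in a continuously updated risk register (illustrative schema).
    description: str
    category: str        # e.g. "incorrect analysis", "bias", "misuse"
    mitigation: str
    last_reviewed: date

register = [
    Risk("Citations to non-existent or repealed case law",
         "incorrect analysis",
         "Retrieval-grounded answers plus a citation verification step",
         date(2026, 8, 2)),
    Risk("Historical judicial data encodes past disparities",
         "bias",
         "Dataset audit before each retraining",
         date(2026, 8, 2)),
]
```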
Data Governance
Training and testing datasets (case law databases, statutory texts, judicial decisions) must be relevant, representative, and free from bias. Special scrutiny applies to historical judicial data that may encode past discrimination.
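One concrete check a provider might run before training, sketched here with hypothetical records and group labels, is comparing favourable-outcome rates across groups in the historical data:

```python
from collections import defaultdict

# Hypothetical historical decisions considered as training data.
decisions = [
    {"group": "A", "favourable": True},
    {"group": "A", "favourable": True},
    {"group": "B", "favourable": True},
    {"group": "B", "favourable": False},
]

totals = defaultdict(int)
favourable = defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    favourable[d["group"]] += d["favourable"]  # True counts as 1

rates = {g: favourable[g] / totals[g] for g in totals}
# A ratio well below 1.0 flags the dataset for review before training.
ratio = min(rates.values()) / max(rates.values())
print(rates, f"ratio={ratio:.2f}")  # {'A': 1.0, 'B': 0.5} ratio=0.50
```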
Technical Documentation (Annex IV)
Detailed documentation covering the system's design, training data, testing methodology, intended purpose, performance metrics, and limitations. Must be available to market surveillance authorities.
Record-Keeping & Logging
Automatic event logs must be generated. These are especially important for §8(a) systems — a judge or arbitrator must be able to demonstrate what the AI analysed and recommended in any given proceeding.
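Art 12 requires automatic logging but does not prescribe a schema. A minimal sketch of a per-proceeding, append-only record (all field names are our own, not mandated) could look like this:

```python
import json
from datetime import datetime, timezone

def log_ai_event(case_id: str, query: str, sources: list[str],
                 recommendation: str,
                 path: str = "ai_event_log.jsonl") -> None:
    # Append-only record of what the AI analysed and recommended in a
    # given proceeding; one JSON object per line.
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "query": query,                # what the court asked the system
        "sources_consulted": sources,  # case law / statutes retrieved
        "recommendation": recommendation,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```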
Transparency & Instructions for Use
Clear instructions describing the system's capabilities, limitations, performance on specific domains of law, and known failure modes. Courts need to understand what the AI can and cannot reliably do.
Human Oversight
The system must be designed so that the judicial authority remains in control. The judge or arbitrator must be able to understand, verify, and override the AI's analysis. The system cannot independently determine a legal outcome.
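In design terms this often means the AI output stays a draft until a named human acts on it. A sketch under our own assumptions (the statuses and fields below are not from the Act):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiAnalysis:
    # AI output held as a draft until a judge or arbitrator reviews it.
    summary: str
    status: str = "DRAFT"            # DRAFT -> CONFIRMED or OVERRIDDEN
    reviewer: Optional[str] = None
    reviewer_note: str = ""

def confirm(a: AiAnalysis, judge: str, note: str = "") -> None:
    a.status, a.reviewer, a.reviewer_note = "CONFIRMED", judge, note

def override(a: AiAnalysis, judge: str, note: str) -> None:
    # The human substitutes their own analysis; the draft stays on
    # record (see the logging sketch above) but never becomes operative.
    a.status, a.reviewer, a.reviewer_note = "OVERRIDDEN", judge, note

def usable_in_decision(a: AiAnalysis) -> bool:
    # Output may feed the decision only after explicit human review.
    return a.status == "CONFIRMED"
```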
Accuracy, Robustness & Cybersecurity
Must achieve declared accuracy levels for its stated legal domain. Must be resilient against manipulation. Particularly critical where the AI's analysis could influence decisions affecting fundamental rights.
Unlike biometric identification (Annex III §1) or law enforcement AI (Annex III §6), judicial AI under §8(a) does not require a mandatory notified body assessment. The provider can complete a self-assessment under Art 43(2) using the Annex VI procedure — provided the system is not also a product covered by Union harmonisation legislation requiring third-party assessment.
Self-assessment route (Annex VI): the provider verifies that its quality management system complies with Art 17, examines the technical documentation against the Section 2 requirements, and checks that the design and development process and post-market monitoring are consistent with that documentation; it then draws up the EU declaration of conformity and affixes the CE marking.
What Article 86 says
Art 86(1) gives any affected person who is subject to a decision taken by the deployer on the basis of output from a high-risk Annex III AI system the right to obtain clear and meaningful explanations of the AI's role in the decision-making procedure and of the main elements of the decision taken. This right applies where the decision produces legal effects or similarly significantly affects the person adversely.
Who can invoke Art 86
Any natural person adversely affected by a decision that was taken using output from the §8(a) AI system. In a legal context: a party to proceedings where AI-assisted analysis contributed to a judgment or award.
Who must respond
The deployer — the judicial authority or ADR body that used the high-risk AI system. Not the LegalTech provider. The deployer must be able to explain what role the AI played and the main elements of the decision.
What "clear and meaningful" means
The explanation must cover the AI system's role in the procedure — was it used to find case law, interpret a statute, or suggest an outcome? — and the main elements of the actual decision taken. Not a technical description of the model, but a plain account of what the AI contributed.
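As a sketch of what a deployer-side explanation record could contain, the structure below is our illustration, not a prescribed form:

```python
def art_86_explanation(ai_role: str, main_elements: list[str]) -> str:
    # Plain-language account for the affected party: the AI's role in
    # the procedure plus the main elements of the decision taken.
    lines = ["Role of the AI system in this proceeding:",
             f"  {ai_role}",
             "Main elements of the decision:"]
    lines += [f"  - {e}" for e in main_elements]
    return "\n".join(lines)

print(art_86_explanation(
    "Retrieved and summarised candidate case law; it did not propose "
    "or determine the outcome.",
    ["Claim dismissed for lack of standing",
     "Costs awarded to the respondent"],
))
```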
Exceptions and limitations
Art 86(2) allows exceptions or restrictions that follow from Union or national law. Under Art 86(3), the right applies only to the extent it is not already provided for under other Union law. Member States may carve out the judicial branch from this obligation if national procedural law already provides equivalent protections.
A court or ADR body that procures and uses an Annex III §8(a) AI system is a deployer under Art 26. Key deployer obligations:
Use in accordance with instructions for use
The AI must be used as the provider specifies. Courts should incorporate the provider's instructions into their AI governance policy.
Human oversight assignment
Assign competent, trained persons for oversight. In a judicial context, this means the judge or judicial officer must remain in control and be able to verify and override the AI's analysis.
Monitor operation and suspend if risk is identified
If the court considers the AI system may present a risk, it must suspend use and inform the provider and the relevant market surveillance authority without undue delay.
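A sketch of this deployer workflow follows; the notification channels and system identifier are placeholders, since the Act specifies the duty but not the mechanics:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-governance")

def handle_suspected_risk(system_id: str, reason: str) -> None:
    # Art 26: suspend use, then inform the provider and the market
    # surveillance authority without undue delay.
    suspend_system(system_id)
    for recipient in ("provider", "market surveillance authority"):
        notify(recipient, system_id, reason)

def suspend_system(system_id: str) -> None:
    log.warning("Use of %s suspended pending review", system_id)

def notify(recipient: str, system_id: str, reason: str) -> None:
    log.info("Notified %s about %s: %s", recipient, system_id, reason)

handle_suspected_risk("legal-research-v2",
                      "systematic citation of repealed provisions")
```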
Retain automatic logs for at least 6 months
Automatically generated logs of AI system use must be kept for at least 6 months. These may be needed for procedural appeals or rights-based challenges.
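A minimal retention gate is sketched below, approximating six months as 183 days (our assumption; a court may need to hold logs much longer while appeals are pending):

```python
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)  # "at least six months", approximated

def purgeable(logged_at: datetime, now: datetime) -> bool:
    # An entry may be purged only after the statutory minimum; pending
    # appeals or rights-based challenges may justify keeping it longer.
    return now - logged_at > MIN_RETENTION

entry = datetime(2026, 9, 1, tzinfo=timezone.utc)
print(purgeable(entry, datetime(2026, 12, 1, tzinfo=timezone.utc)))  # False
print(purgeable(entry, datetime(2027, 4, 1, tzinfo=timezone.utc)))   # True
```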
Notify parties that AI is being used
Art 26(11) requires deployers of Annex III AI that make or assist in decisions about natural persons to inform those persons that they are subject to the use of a high-risk AI system.
No changes are proposed under COM(2025) 836 or COM(2025) 837 specifically for Annex III §8 judicial AI obligations. The general high-risk framework, including Art 14 human oversight and Art 86 right to explanation, remains unchanged in both proposals.
Key grey areas in LegalTech AI classification
Watch out — borderline scenarios, such as a general-purpose legal research tool that is later marketed specifically for court use, or an ADR platform that adds analytical features for arbitrators, commonly trigger unexpected Annex III §8 classification.
Regumatrix analyses your AI system description against all 8 Annex III domains. You get your risk tier, exact classification, every obligation under Arts 9–15, conformity assessment route, fine exposure under Art 99, and whether Art 86 right to explanation applies — in a cited 8-section report.
Start free — no credit card. 3 free analyses included.