Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
8 Annex III domains · Deadline: 2 August 2026

EU AI Act: Is Your AI System High-Risk?

High-risk classification triggers the most demanding set of obligations in the EU AI Act. Use this guide to self-assess whether your system falls within Annex III or the Article 6(1) product safety track.

Article 99 penalty for high-risk AI providers and deployers

Placing a high-risk AI system on the market without meeting the Articles 9–17 requirements, without conformity assessment, or without EU database registration carries an administrative fine of up to €15,000,000 or 3% of total worldwide annual turnover, whichever is higher (Article 99(4)). Compliance is mandatory from 2 August 2026.
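The "whichever is higher" rule is simple arithmetic. A minimal Python sketch, with the Article 99(4) caps for high-risk obligations as defaults (the function name is illustrative, not from any official tooling):

```python
def fine_cap(worldwide_turnover_eur: float,
             fixed_cap_eur: float = 15_000_000,
             turnover_pct: float = 0.03) -> float:
    """Maximum administrative fine under Article 99(4): the higher of the
    fixed cap and the percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

# An undertaking with EUR 2 billion turnover: 3% = EUR 60M > EUR 15M cap.
print(fine_cap(2_000_000_000))  # 60000000.0
```

For smaller companies the fixed cap dominates: at €100M turnover, 3% is only €3M, so the €15M cap applies.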

Not sure which track applies to your AI system?

Describe what your system does and Regumatrix checks it against Article 6, Annex III, and the full regulation — in about 30 seconds. Your first 3 analyses are free.

Check my AI system — 3 free analyses included

Two paths to high-risk classification

Track 1

Article 6(1) — Product safety track

AI systems that are safety components of products (or are themselves products) covered by the EU harmonisation legislation listed in Annex I (e.g. the Machinery Regulation, Medical Device Regulation, or aviation safety rules), and that are required to undergo third-party conformity assessment under that legislation, are automatically high-risk.

Track 2

Article 6(2) + Annex III — Standalone high-risk AI

AI systems listed in any of the 8 domains in Annex III are high-risk unless an exemption under Article 6(3) applies. All three conditions below must be met:

Your AI system falls within one of the 8 domains listed in Annex III, AND
It makes, assists in making, or materially influences a decision that affects people's access to opportunities, services, or rights, AND
It is not exempt under Article 6(3) (e.g. narrow procedural or preparatory tasks; note the exemption never applies where the system performs profiling of natural persons)
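The three conditions above can be sketched as a single boolean check. This is an illustrative helper, not official tooling; the inputs are the self-assessment answers, not legal conclusions:

```python
def is_high_risk_annex_iii(in_annex_iii_domain: bool,
                           influences_decision: bool,
                           exempt_under_6_3: bool) -> bool:
    """Article 6(2) logic as described above: all three conditions must
    hold for a standalone system to be classified high-risk."""
    return (in_annex_iii_domain
            and influences_decision
            and not exempt_under_6_3)

# Example: CV-screening AI — Annex III point 4, materially influences
# hiring decisions, no Article 6(3) exemption claimed.
print(is_high_risk_annex_iii(True, True, False))  # True
```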

The 8 high-risk domains (Annex III)

1. Biometrics: remote biometric identification, biometric categorisation, emotion recognition
2. Critical infrastructure: AI managing electricity grids, water supply, road traffic, or digital infrastructure
3. Education and vocational training: AI that determines access to education, evaluates students, or informs teacher assessments
4. Employment and workers' management: CV screening, candidate ranking, performance monitoring, promotion/demotion AI
5a. Access to essential private services: credit scoring, insurance pricing, mortgage approval
5b. Access to essential public services: benefits entitlement, social services allocation, tax assessment AI
6. Law enforcement: individual risk assessment, polygraph AI, crime analytics, evidence evaluation
7. Migration, asylum and border control: asylum assessment, visa processing AI, border screening
8. Administration of justice and democratic processes: AI assisting courts, dispute resolution, election integrity tools

Read the full list at Annex III and the classification rules at Article 6.

Article 6(3) exemptions

Even if your system falls within an Annex III domain, it may be exempt if it: performs a narrow procedural task; improves the result of a previously completed human activity; detects decision-making patterns or deviations from them without replacing human assessment; or performs a task preparatory to an assessment. The exemption never applies to systems that profile natural persons, and under Article 6(4) you must document your exemption assessment in writing before placing the system on the market.
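The four exemption grounds above can be captured as a documented checklist. A sketch with illustrative field names (these are shorthand labels, not statutory terms):

```python
# Article 6(3) exemption grounds, keyed by shorthand label.
EXEMPTION_GROUNDS = {
    "narrow_procedural_task":
        "performs a narrow procedural task",
    "improves_prior_human_activity":
        "improves the result of a previously completed human activity",
    "pattern_detection_for_human_review":
        "detects decision-making patterns without replacing human assessment",
    "preparatory_task_only":
        "performs a task preparatory to an assessment",
}

def claimed_exemptions(flags: dict) -> list:
    """Return the grounds a team claims. Each one must be documented
    in writing (Article 6(4)) before the system goes to market."""
    return [EXEMPTION_GROUNDS[k] for k, claimed in flags.items() if claimed]
```

A team claiming only the narrow-procedural and preparatory grounds would record exactly those two descriptions in its assessment file.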

Could your AI be high-risk without you knowing?

Many teams correctly identify that their AI falls within an Annex III domain, but then assume it is exempt under Article 6(3). These exemptions are narrow and must be formally documented. Check if any of these apply to your system:

  • Your system influences decisions about access to services or employment even if a human approves
  • Your AI is technically a "recommendation engine" but the recommendations are rarely overridden
  • You built your AI for one use case but deployers are using it in higher-risk contexts
  • Your system processes biometric data for any purpose, even as a secondary function
  • You haven't formally documented your Article 6(3) exemption reasoning in writing
Check my AI system against Annex III →

Frequently asked questions

What makes an AI system high-risk under the EU AI Act?
An AI system is high-risk if it is listed in Annex III of the EU AI Act — covering 8 domains including biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice. An AI system is also high-risk if it is a safety component of a product covered by existing EU harmonisation legislation (Article 6(1)).
When do high-risk AI obligations apply?
Most high-risk AI system requirements — including risk management (Article 9), technical documentation (Article 11), logging (Article 12), human oversight (Article 14), and registration in the EU database — apply from 2 August 2026. AI systems in products covered by Annex I product safety law (e.g. medical devices) have until 2 August 2027.
Is emotion recognition AI always high-risk?
No: it is either prohibited or high-risk depending on context. Inferring the emotions of a natural person in workplace and education settings is prohibited under Article 5(1)(f), except for medical or safety reasons. Outside those prohibited contexts, emotion recognition systems are high-risk under Annex III point 1.
Does the Digital Omnibus Package remove high-risk obligations for SMEs?
The EU Commission's 2025 Digital Omnibus proposal would reduce some obligations for SMEs and startups — including lighter conformity assessment procedures and simplified documentation. However, the proposal is not yet law and the August 2026 deadline remains in effect under the current regulation.

Related compliance guides

  • Healthcare AI obligations
  • HR & Recruitment AI guide
  • Financial services AI guide
  • GPAI / foundation model rules
  • Fines & penalties (Article 99)
  • All enforcement deadlines

Know exactly where your system stands before August 2026

High-risk classification is not just a label: it triggers risk management, technical documentation, logging, human oversight, conformity assessment, CE marking, and EU database registration. Getting the classification wrong in either direction is costly: over-classifying means carrying obligations you do not owe, while under-classifying means Article 99 exposure.

Describe your AI system in plain language. Regumatrix checks it against every article of the EU AI Act and returns your risk tier, Annex classification, the exact obligations that apply, and your fine exposure under Article 99. Eight sections. About 30 seconds.

Check your AI system free →
All compliance guides

8-section report · Article citations · ~30 seconds · No credit card