Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026

EU AI Act for Law Enforcement AI

Law enforcement AI sits at the intersection of the EU AI Act's two harshest regimes — outright prohibition and the full high-risk compliance track. Two practices are banned entirely. Five more are high-risk under Annex III §6 and must clear the full set of high-risk obligations, including conformity assessment under Art 43, before deployment. This guide explains what is banned, what is regulated, and what deployers must do before using any of it.

Penalty exposure

Prohibited practice violations (Art 5): up to €35,000,000 or 7% of global annual turnover — whichever is higher (lower for SMEs under Art 99(6)).

High-risk obligation violations (Art 9–15, Art 43): up to €15,000,000 or 3% of global annual turnover — whichever is higher.
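To make the ceilings concrete, here is a minimal Python sketch of the Art 99 maxima (function names are illustrative; the Act sets upper bounds, not a formula for the fine a regulator will actually impose):

```python
# Illustrative only: Art 99 sets ceilings, not the fine actually imposed.

def art5_ceiling(global_turnover_eur: float) -> float:
    """Maximum fine for prohibited-practice violations (Art 99(3)):
    the higher of EUR 35M or 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * global_turnover_eur)

def high_risk_ceiling(global_turnover_eur: float) -> float:
    """Maximum fine for high-risk obligation violations (Art 99(4)):
    the higher of EUR 15M or 3% of worldwide annual turnover."""
    return max(15_000_000, 0.03 * global_turnover_eur)

# A provider with EUR 2bn turnover: 7% (EUR 140M) exceeds the EUR 35M floor.
print(art5_ceiling(2_000_000_000))       # 140000000.0
print(high_risk_ceiling(2_000_000_000))  # 60000000.0
```

For SMEs, Art 99(6) flips each comparison to whichever amount is lower.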

Not sure whether your system falls under Annex III §6 or hits one of the Article 5 prohibitions?

Check your AI system in 30 seconds

3 free analyses — no credit card required.

Prohibited practices: what is banned entirely

These are not "high-risk and regulated" — they are banned outright under Art 5. No compliance process makes them legal.

BANNED · Art 5(1)(d)

Predictive policing — risk assessment based solely on profiling

It is prohibited to place on the market, put into service, or use an AI system that makes risk assessments of natural persons to predict the risk of committing a criminal offence, where this is based solely on profiling or on assessing personality traits and characteristics.

Narrow exception — this is still allowed:

AI systems that support a human assessment of a person's involvement in a criminal activity — where that involvement is already based on objective and verifiable facts directly linked to a criminal activity — are not prohibited. The distinction is between AI driving the risk score (banned) versus AI assisting a human already working from concrete facts (allowed).
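Read as a decision rule, the distinction is narrow. A hypothetical sketch (the flag names are ours, and no real determination reduces to three booleans):

```python
def art5_1_d_status(solely_profiling_based: bool,
                    supports_human_assessment: bool,
                    objective_verifiable_facts: bool) -> str:
    """Rough reading of the Art 5(1)(d) line between banned and allowed."""
    if solely_profiling_based:
        # The AI itself drives the risk score from profiling or
        # personality traits alone: prohibited outright.
        return "prohibited (Art 5(1)(d))"
    if supports_human_assessment and objective_verifiable_facts:
        # AI assists a human already working from concrete facts:
        # not prohibited, but still high-risk under Annex III section 6.
        return "allowed, high-risk (Annex III section 6)"
    return "grey area: obtain legal analysis"
```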

BANNED (with 3 exceptions) · Art 5(1)(h)

Real-time remote biometric identification in public spaces

Using “real-time” remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement is prohibited — unless strictly necessary for one of three defined objectives:

(i) Search for specific victims of abduction, trafficking in human beings, or sexual exploitation, and for missing persons.

(ii) Prevention of a specific, substantial and imminent threat to life or physical safety, or a genuine and foreseeable threat of a terrorist attack.

(iii) Localisation or identification of a person suspected of a criminal offence covered by Annex II, punishable by at least 4 years' custody — for the purpose of investigation, prosecution or executing a criminal penalty.

Mandatory process for the 3 exceptions (Art 5(2)–(3)):

  • Prior judicial or independent administrative authorisation is required before use — or within 24 hours in genuine emergencies.
  • A Fundamental Rights Impact Assessment (Art 27) must be completed and the system registered in the EU database (Art 49) before deployment (registration can follow in genuine urgency cases).
  • Use must be limited to confirming the identity of the specifically targeted individual only.
  • All data must be deleted if the judicial authorisation is rejected.
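Taken together, these conditions act as a pre-use gate. A minimal sketch, assuming a hypothetical record of the deployment's paperwork:

```python
from dataclasses import dataclass

@dataclass
class RtbiDeployment:
    exception_objective: str   # one of (i), (ii), (iii) above
    authorised: bool           # judicial / independent administrative authorisation
    fria_done: bool            # Art 27 Fundamental Rights Impact Assessment
    registered: bool           # EU database registration (Art 49)
    genuine_emergency: bool = False

def may_use(d: RtbiDeployment) -> bool:
    # The FRIA is a hard pre-condition, even in emergencies (Art 5(2)).
    if not d.fria_done:
        return False
    # Authorisation must precede use; in genuine emergencies it may
    # follow, but must then be requested within 24 hours.
    if not d.authorised and not d.genuine_emergency:
        return False
    # Registration may likewise trail only in genuine urgency cases.
    return d.registered or d.genuine_emergency
```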

High-risk AI: Annex III §6 — five categories

These uses are not prohibited — they are lawful if you comply with the full high-risk regime, including conformity assessment under Art 43. All five apply to systems used “by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities.”

A. Victim risk assessment
AI used to assess the risk of a natural person becoming the victim of criminal offences.

B. Polygraph and similar tools
AI used as polygraphs or similar tools to detect deception or physiological responses.

C. Evidence reliability evaluation
AI used to evaluate the reliability of evidence during criminal investigation or prosecution.

D. Recidivism / offending risk scoring
AI used to assess the risk of a person offending or re-offending (where not based solely on profiling, which Art 5(1)(d) bans), or to assess personality traits, characteristics, or past criminal behaviour.

E. Criminal profiling
AI used for profiling of natural persons (as defined in Directive 2016/680) in the course of detection, investigation or prosecution of criminal offences.
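For routing purposes the five categories collapse into a single lookup, since every entry triggers the same compliance track. A sketch with illustrative names:

```python
ANNEX_III_6 = {
    "A": "victim risk assessment",
    "B": "polygraph and similar tools",
    "C": "evidence reliability evaluation",
    "D": "recidivism / offending risk scoring",
    "E": "criminal profiling (Directive 2016/680)",
}

def high_risk_obligations(category: str) -> list[str]:
    """All five categories share the identical compliance track."""
    if category not in ANNEX_III_6:
        raise ValueError(f"not an Annex III section 6 category: {category}")
    return [
        "risk management system (Art 9)",
        "data governance and data quality (Art 10)",
        "technical documentation (Art 11, Annex IV)",
        "human oversight design (Art 14)",
        "accuracy and robustness (Art 15)",
        "conformity assessment (Art 43)",
        "EU database registration (Art 49)",
    ]
```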

Migration and border AI (Annex III §7) — a separate but related set of four high-risk categories covers migration, asylum and border control AI (polygraph-like tools, entry risk assessment, asylum/visa application processing, and biometric border detection). The same conformity assessment route and full high-risk obligations apply.

Conformity assessment: which route applies to Annex III §6 and §7

Internal control versus third-party assessment

Under Art 43(2), high-risk AI systems listed in points 2 to 8 of Annex III, including law enforcement AI (§6) and migration AI (§7), follow the conformity assessment procedure based on internal control (Annex VI), which does not involve a notified body. Third-party assessment enters where a system also falls under Annex III §1 (biometrics): a law enforcement system performing remote biometric identification must be assessed by an EU-designated notified body listed in the NANDO database unless the provider has applied harmonised standards in full (Art 43(1)).

Whichever route applies, conformity must be demonstrated against the requirements of Chapter III, Section 2 of the AI Act — including:

  • Risk management system (Art 9)
  • Data governance and data quality (Art 10)
  • Technical documentation (Art 11, Annex IV)
  • Accuracy and robustness (Art 15)
  • Human oversight design (Art 14)

Post-remote biometric identification: judicial authorisation required

Art 26(10) · Deployer obligation — applies to post-event footage

This is distinct from the Art 5(1)(h) real-time prohibition. Post-remote biometric identification — identifying individuals from recorded footage — is not prohibited, but it requires authorisation.

  • A law enforcement deployer must request judicial or administrative authorisation before use — or within 48 hours if urgency requires earlier action.
  • Exception: initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence does not require prior authorisation.
  • If authorisation is rejected, use must stop immediately and all data must be deleted.
  • Use must be limited to what is strictly necessary for the specific criminal offence being investigated — no untargeted, speculative use is permitted.
  • No adverse legal decision may be taken based solely on the output of such a system.
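The 48-hour window is the piece deployers most often trip over. A sketch of the timing rule, with hypothetical field names:

```python
from datetime import datetime, timedelta
from typing import Optional

AUTH_WINDOW = timedelta(hours=48)  # Art 26(10) outer limit for urgent use

def post_rbi_permitted(first_use: datetime,
                       auth_requested: Optional[datetime],
                       auth_granted: Optional[bool],
                       initial_suspect_id_on_facts: bool) -> bool:
    # Exception: initial identification of a potential suspect from
    # objective, verifiable facts linked to the offence.
    if initial_suspect_id_on_facts:
        return True
    # Authorisation must be requested ex ante, or within 48 hours of use.
    if auth_requested is None or auth_requested - first_use > AUTH_WINDOW:
        return False
    # Pending (None) counts as permitted for now; a rejection means
    # stop immediately and delete all data.
    return auth_granted is not False
```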

Human oversight: the two-person verification rule

Art 14(5) · Biometric identification AI only

For high-risk AI systems listed in Annex III point 1(a) — remote biometric identification — Article 14(5) requires that no action or decision is taken by the deployer based on the AI identification unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority.

Law enforcement exception: The two-person verification requirement does not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate.
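Where the rule does apply, it reduces to counting distinct, competent confirmations. A minimal sketch with hypothetical types:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Verifier:
    person_id: str
    competent: bool  # has the necessary competence, training and authority

def may_act_on_identification(verifiers: list[Verifier],
                              exception_applies: bool) -> bool:
    """Art 14(5): no action on a biometric match without at least two
    separate, competent confirmations, unless the carve-out applies."""
    if exception_applies:
        return True
    confirmed_by = {v.person_id for v in verifiers if v.competent}
    return len(confirmed_by) >= 2
```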

Fundamental Rights Impact Assessment: when required

Art 27

Under Article 27, deployers that are public authorities — which includes law enforcement authorities — are required to conduct a Fundamental Rights Impact Assessment before deploying any high-risk AI system listed in Annex III.

For real-time remote biometric identification under the narrow Art 5(1)(h) exceptions, the Fundamental Rights Impact Assessment is an explicit pre-condition stated in Art 5(2) — it is not optional even in emergency situations.

Grey-area warning signals for law enforcement AI

Your system may be closer to a ban than you think if it does any of these:

  • Generates a risk score based on demographics, postcodes, or social network analysis without a concrete prior crime link
  • Uses real-time camera feeds with face matching in public transport, stadiums, or city centres
  • Ranks suspects or generates suspect profiles without verified factual basis
  • Outputs a polygraph-like "deception" or "credibility" score
  • Analyses vocal stress, facial micro-expressions, or gait as indicators of criminal intent
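As a first-pass screen, the signals above can be expressed as flags and counted. A purely illustrative sketch, not a legal determination:

```python
from dataclasses import dataclass, fields

@dataclass
class GreyAreaSignals:
    profiling_without_crime_link: bool = False    # demographic / postcode scoring
    realtime_public_face_matching: bool = False   # live feeds in public spaces
    unverified_suspect_ranking: bool = False      # profiles without factual basis
    deception_scoring: bool = False               # polygraph-like outputs
    physiological_intent_inference: bool = False  # voice stress, gait, expressions

def flags_raised(s: GreyAreaSignals) -> list[str]:
    """Names of the warning signals a system exhibits; any raised flag
    warrants a formal Article 5 / Annex III analysis."""
    return [f.name for f in fields(s) if getattr(s, f.name)]
```
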
Analyse your system against Article 5 and Annex III

No changes are proposed under COM(2025) 836 or COM(2025) 837 for the law enforcement AI provisions of the EU AI Act. The Article 5 prohibitions and Annex III §6 categories remain as enacted. For the broader context of data law changes affecting law enforcement data processing, see the COM(2025) 837 — GDPR & Data Law Changes guide.

Frequently asked questions

Which law enforcement AI practices are banned under the EU AI Act?
Two practices are banned under Article 5. First, Article 5(1)(d) bans AI systems that make risk assessments to predict the risk of a person committing a criminal offence based solely on profiling or personality traits — this is the 'predictive policing' prohibition. The exception is narrow: AI systems that support human assessment of involvement in a criminal activity already based on objective and verifiable facts directly linked to a criminal activity are allowed. Second, Article 5(1)(h) bans 'real-time' remote biometric identification in publicly accessible spaces for law enforcement purposes — except in three narrow, defined emergency situations requiring prior judicial or administrative authorisation. Both violations carry penalties up to €35 million or 7% of global turnover.
What are the 5 high-risk law enforcement AI categories under Annex III §6?
Annex III §6 lists five high-risk AI categories: (1) AI systems used to assess the risk of a person becoming the victim of criminal offences; (2) AI systems used as polygraphs or similar tools; (3) AI systems used to evaluate the reliability of evidence in criminal investigations or prosecutions; (4) AI systems used to assess the risk of a person offending or re-offending, or to assess personality traits, characteristics, or past criminal behaviour; and (5) AI systems used for criminal profiling as defined in Directive 2016/680. All five require full high-risk compliance, with conformity assessment under Article 43.
Does law enforcement AI need a notified body?
Usually no. Under Article 43(2), high-risk AI systems listed in points 2 to 8 of Annex III, including the §6 law enforcement categories, follow the conformity assessment procedure based on internal control (Annex VI), which does not involve a notified body. A notified body becomes mandatory where the system also falls under Annex III §1, for example a law enforcement system performing remote biometric identification, and the provider has not applied harmonised standards in full (Art 43(1)). In that case the provider must use an EU-designated notified body listed in the NANDO database.
What is the judicial authorisation rule for post-remote biometric ID?
Article 26(10) requires that a law enforcement deployer using a high-risk AI system for post-remote biometric identification (i.e., identifying a person from recorded footage rather than live) must request prior judicial or administrative authorisation, except when used for the initial identification of a potential suspect based on objective and verifiable facts directly linked to a criminal offence. The authorisation must be requested in advance, or without undue delay and no later than 48 hours if urgency requires earlier use. If the authorisation is rejected, use must stop immediately and all data must be deleted. Use must be limited to what is strictly necessary for the specific criminal investigation.
Do the EU AI Act law enforcement rules apply to private companies selling AI to police?
Yes — the obligations in Annex III §6 apply to AI systems 'intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities.' A private AI company selling a recidivism risk scoring system to a police force is the provider of a high-risk AI system and must comply with all provider obligations under Articles 16–21, which incorporate the Article 9–15 requirements: technical documentation, risk management, data governance, human oversight features, and the Article 43 conformity assessment. The police force, as deployer, must comply with Article 26 obligations, including managing human oversight and maintaining logs.

Related compliance guides

Banned AI Practices (Article 5)

All 8 prohibited uses — social scoring, biometric scraping, predictive policing.

Biometric AI Systems

Face recognition, iris scanning, emotion detection — the full biometric compliance guide.

Conformity Assessment (Article 43)

When a notified body is required versus self-assessment routes.

Fundamental Rights Impact Assessment

Who must do it, what to assess, when the FRIA is a legal pre-condition.

Market Surveillance & Enforcement

Which authority investigates law enforcement AI — national authorities or the AI Office?

Human Oversight (Article 14)

Design requirements, two-person verification, and oversight assignment obligations.

Check your law enforcement AI system now

Regumatrix checks your system against Article 5's prohibitions and every Annex III category — and returns your exact risk tier, whether a notified body is required, the Article 26(10) authorisation obligations, and your fine exposure under Article 99. Cited report. ~30 seconds.

Analyse your AI system

3 free analyses — no credit card required