Law enforcement AI sits at the intersection of the EU AI Act's two harshest regimes — outright prohibition and the full high-risk compliance track. Two practices are banned entirely. Five more are high-risk under Annex III §6 and carry the full set of high-risk obligations, from risk management through conformity assessment. This guide explains what is banned, what is regulated, and what deployers must do before using any of it.
Prohibited practice violations (Art 5): up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher. For SMEs and start-ups, the lower of the two applies (Art 99(6)).
High-risk obligation violations (Art 9–Art 15, Art 43): up to €15,000,000 or 3% of total worldwide annual turnover, whichever is higher.
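The fine caps above reduce to simple arithmetic. A minimal sketch — the function name and parameters are illustrative, not terms from the Act:

```python
def fine_cap_eur(global_turnover_eur: float, prohibited: bool, sme: bool = False) -> float:
    """Upper bound of the administrative fine under Art 99.

    Art 5 (prohibited practice) violations: EUR 35M or 7% of total worldwide
    annual turnover; high-risk obligation violations: EUR 15M or 3%.
    Non-SMEs take whichever is higher; SMEs take whichever is lower (Art 99(6)).
    """
    fixed, pct = (35_000_000, 0.07) if prohibited else (15_000_000, 0.03)
    turnover_based = pct * global_turnover_eur
    return min(fixed, turnover_based) if sme else max(fixed, turnover_based)

# A provider with EUR 1bn worldwide turnover facing an Art 5 violation:
print(fine_cap_eur(1_000_000_000, prohibited=True))  # 70000000.0 (7% beats the EUR 35M floor)
```

Note that 7% of €1bn (€70M) exceeds the €35M fixed cap, so the turnover-based figure governs for large providers; for an SME the same violation would cap at €35M.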
Not sure whether your system falls under Annex III §6 or hits one of the Article 5 prohibitions?
Check your AI system in 30 seconds: 3 free analyses — no credit card required.
These are not "high-risk and regulated" — they are banned outright under Art 5. No compliance process makes them legal.
It is prohibited to place on the market, put into service, or use an AI system that makes risk assessments of natural persons to predict the risk of committing a criminal offence, where this is based solely on profiling or on assessing personality traits and characteristics.
Narrow exception — this is still allowed:
AI systems that support a human assessment of a person's involvement in a criminal activity — where that involvement is already based on objective and verifiable facts directly linked to a criminal activity — are not prohibited. The distinction is between AI driving the risk score (banned) versus AI assisting a human already working from concrete facts (allowed).
Using “real-time” remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement is prohibited — unless strictly necessary for one of three defined objectives:
Search for specific victims of abduction, trafficking in human beings, or sexual exploitation, and for missing persons.
Prevention of a specific, substantial and imminent threat to life or physical safety, or a genuine and foreseeable threat of a terrorist attack.
Localisation or identification of a person suspected of a criminal offence covered by Annex II, punishable by at least 4 years' custody — for the purpose of investigation, prosecution or executing a criminal penalty.
Mandatory process for the 3 exceptions (Art 5(2)–(3)):
Prior authorisation from a judicial authority or an independent administrative authority whose decision is binding. In a duly justified situation of urgency, use may begin without authorisation, provided the authorisation is requested without undue delay and at the latest within 24 hours; if it is refused, use must stop with immediate effect and the data must be discarded.
A completed Fundamental Rights Impact Assessment under Art 27.
Registration of the system in the EU database (in duly justified cases of urgency, registration may follow without undue delay).
These uses are not prohibited — they are lawful if you comply with the full high-risk regime including mandatory notified body certification. All five apply to systems used “by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities.”
Victim risk assessment
AI used to assess the risk of a natural person becoming the victim of criminal offences.
Polygraph and similar tools
AI used as polygraphs or similar tools to detect deception or physiological responses.
Evidence reliability evaluation
AI used to evaluate the reliability of evidence during criminal investigation or prosecution.
Recidivism / offending risk scoring
AI used to assess the risk of a person offending or re-offending, or to assess personality traits, characteristics, or past criminal behaviour.
Criminal profiling
AI used for profiling of natural persons (as defined in Directive 2016/680) in the course of detection, investigation or prosecution of criminal offences.
Migration and border AI (Annex III §7) — a separate but related set of four high-risk categories covers migration, asylum and border control AI (polygraph-like tools, entry risk assessment, asylum/visa application processing, and biometric border detection). The same conformity assessment route and full high-risk obligations apply.
Under Art 43, high-risk AI systems listed in Annex III follow one of two conformity assessment routes. For biometric systems (Annex III §1), a notified body assessment under Annex VII is mandatory unless harmonised standards have been applied in full (Art 43(1)); notified bodies designated under the Act are listed in the NANDO database. For law enforcement AI (Annex III §6) and migration AI (Annex III §7), Art 43(2) prescribes the conformity assessment procedure based on internal control (Annex VI), which does not involve a notified body: the provider self-assesses, under the heightened scrutiny national market surveillance authorities apply given the fundamental rights implications.
Whichever route applies, the conformity assessment covers the requirements of Chapter III, Section 2 of the AI Act — including:
Risk management system (Art 9)
Data and data governance (Art 10)
Technical documentation (Art 11)
Record-keeping (Art 12)
Transparency and provision of information to deployers (Art 13)
Human oversight (Art 14)
Accuracy, robustness and cybersecurity (Art 15)
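The Art 43 routing can be expressed as a small check. A sketch — the function and return strings are hypothetical labels, not terminology from the Act:

```python
def conformity_route(annex_iii_point: int, harmonised_standards_applied: bool = True) -> str:
    """Minimum mandatory conformity assessment route under Art 43
    for a high-risk system listed in Annex III.

    Point 1 (biometrics): if harmonised standards are not applied in full,
    the Annex VII procedure involving a notified body is mandatory (Art 43(1));
    if they are applied, the provider may choose either route.
    Points 2-8, including law enforcement (point 6) and migration (point 7),
    follow internal control under Annex VI (Art 43(2)).
    """
    if annex_iii_point == 1 and not harmonised_standards_applied:
        return "notified body (Annex VII)"
    return "internal control (Annex VI)"

print(conformity_route(6))  # law enforcement AI -> internal control (Annex VI)
```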
This is distinct from the Art 5(1)(h) real-time prohibition. Post-remote biometric identification — identifying individuals from recorded footage — is not prohibited, but under Art 26(10) the deployer must request authorisation from a judicial or administrative authority whose decision is binding, either ex ante or without undue delay and no later than 48 hours after use.
For high-risk AI systems listed in Annex III point 1(a) — remote biometric identification — Article 14(5) requires that no action or decision is taken by the deployer based on the AI identification unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority.
Law enforcement exception: The two-person verification requirement does not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate.
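The Art 14(5) gate can be sketched as a simple check. The class, function, and field names below are illustrative assumptions, not anything defined by the Act:

```python
from dataclasses import dataclass

@dataclass
class Verification:
    verifier_id: str   # the natural person who separately verified the match
    competent: bool    # has the necessary competence, training and authority

def may_act_on_match(verifications: list[Verification],
                     disproportionate_under_law: bool = False) -> bool:
    """Art 14(5): no action or decision on a remote biometric identification
    unless separately verified and confirmed by at least two competent
    natural persons. The requirement is disapplied for law enforcement,
    migration, border control or asylum uses where Union or national law
    considers it disproportionate."""
    if disproportionate_under_law:
        return True
    # Two confirmations from the same person do not count twice.
    qualified = {v.verifier_id for v in verifications if v.competent}
    return len(qualified) >= 2

print(may_act_on_match([Verification("officer-a", True),
                        Verification("officer-b", True)]))  # True
```

Deduplicating by verifier captures the "separately verified" element: two sign-offs from one officer would not satisfy the rule.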
Under Article 27, deployers that are public authorities — which includes law enforcement authorities — are required to conduct a Fundamental Rights Impact Assessment before deploying any high-risk AI system listed in Annex III, with the exception of systems in the critical infrastructure area (point 2).
For real-time remote biometric identification under the narrow Art 5(1)(h) exceptions, the Fundamental Rights Impact Assessment is an explicit pre-condition stated in Art 5(2) — it is not optional even in emergency situations.
Your system may be closer to a ban than you think if it does any of these:
Scores the likelihood that a natural person will commit a criminal offence based solely on profiling or on personality traits and characteristics (Art 5(1)(d)).
Performs real-time remote biometric identification in publicly accessible spaces for law enforcement purposes outside the three Art 5(1)(h) exceptions, or inside them but without the mandatory prior authorisation, FRIA and registration.
No changes are proposed under COM(2025) 836 or COM(2025) 837 for the law enforcement AI provisions of the EU AI Act. The Article 5 prohibitions and Annex III §6 categories remain as enacted. For the broader context of data law changes affecting law enforcement data processing, see the COM(2025) 837 — GDPR & Data Law Changes guide.
Banned AI Practices (Article 5)
All 8 prohibited uses — social scoring, biometric scraping, predictive policing.
Biometric AI Systems
Face recognition, iris scanning, emotion detection — the full biometric compliance guide.
Conformity Assessment (Article 43)
When a notified body is required versus self-assessment routes.
Fundamental Rights Impact Assessment
Who must do it, what to assess, when the FRIA is a legal pre-condition.
Market Surveillance & Enforcement
Which authority investigates law enforcement AI — national authorities or the AI Office?
Human Oversight (Article 14)
Design requirements, two-person verification, and oversight assignment obligations.
Regumatrix checks your system against Article 5's prohibitions and every Annex III category — and returns your exact risk tier, whether a notified body is required, the Article 26(10) authorisation obligations, and your fine exposure under Article 99. Cited report. ~30 seconds.
Analyse your AI system: 3 free analyses — no credit card required