Biometric AI has two distinct compliance tracks under the EU AI Act. Some uses are outright banned — in force since February 2025, with fines up to €35,000,000 or 7% of worldwide turnover, whichever is higher. Others are classified as high-risk under Annex III, triggering full provider and deployer obligations from August 2026 and — uniquely among Annex III categories — potential notified body involvement.
Two separate compliance problems — check which one applies to you first
Track 1 — Already prohibited (Feb 2025): Four biometric practices are banned with immediate effect. Violating any of them carries fines up to €35,000,000 or 7% of global turnover under Article 99(3).
Track 2 — High-risk from Aug 2026: Three Annex III HR-1 biometric system types carry full Articles 9–21 obligations. Biometrics is the only Annex III category where a notified body assessment may be compulsory. Violation fines: €15,000,000 or 3% under Article 99(4).
Not sure whether your biometric system is prohibited or high-risk?
Analyse your system
Article 5(1) lists eight prohibited AI practices. Four relate specifically to biometric AI. These prohibitions have been in legal effect since 2 February 2025 and carry the highest fine ceiling in the regulation — up to €35,000,000 or 7% of total worldwide annual turnover under Article 99(3), whichever is higher.
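To make "whichever is higher" concrete, here is a minimal sketch of the ceiling arithmetic. The thresholds are the Article 99(3) and 99(4) figures cited in this guide; the turnover figure is an invented example.

```python
def fine_ceiling(fixed_eur: int, pct: float, worldwide_turnover_eur: float) -> float:
    """Maximum fine ceiling: the fixed amount or the percentage of total
    worldwide annual turnover, whichever is higher (the Art 99(3)/(4) pattern)."""
    return max(fixed_eur, pct * worldwide_turnover_eur)

# Example: a provider with EUR 2bn worldwide turnover (invented figure)
print(fine_ceiling(35_000_000, 0.07, 2_000_000_000))  # Art 99(3) tier: 140,000,000.0
print(fine_ceiling(15_000_000, 0.03, 2_000_000_000))  # Art 99(4) tier: 60,000,000.0
```

For SMEs the Act caps fines at the lower of the two amounts instead; see the SME cap rules in the fines guide listed under related guides below.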
Banned: creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. This covers commercial providers building face recognition databases from publicly available photos, as well as any system that automatically collects facial images at scale without targeting specific individuals. There is no exception.
Banned: AI systems that infer the emotions of natural persons in the areas of the workplace and educational institutions. This covers mood-detection cameras in offices, facial expression analysis in classrooms, and voice-tone analysis tools used to monitor employee engagement. One exception applies: where the system is intended for medical or safety reasons — for example, detecting driver fatigue in a commercial vehicle.
Banned: biometric categorisation systems that use biometric data to deduce or infer a person's race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. The prohibition applies to individually categorising natural persons — it does not cover labelling or filtering of lawfully acquired biometric datasets, such as images, or the categorising of biometric data for law enforcement purposes.
Banned as a default: the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes. Three narrow exceptions apply where the use is strictly necessary for:
- searching for specific victims of abduction, trafficking, or sexual exploitation, or for missing persons;
- preventing a specific, substantial, and imminent threat to life, or a genuine and foreseeable terrorist attack;
- locating or identifying a person suspected of an Annex II criminal offence punishable by at least four years' custodial sentence.
Even where an exception applies, prior judicial or independent administrative authorisation is required (or retroactive authorisation within 24 hours in justified urgency). No adverse legal decision may rest solely on the system's output. This prohibition applies only to law enforcement use — private sector real-time biometric identification in public is not covered by Art 5(1)(h), but it remains high-risk under Annex III 1(a).
Article 6(2) makes all systems in Annex III high-risk by definition. Annex III point 1 lists three biometric categories — provided their use is permitted under relevant Union or national law. Meeting any of these descriptions triggers full high-risk obligations from August 2026.
AI systems that identify a natural person by searching across a database of many individuals using biometric data captured from a distance — for example, face recognition camera systems in transport hubs, iris-scan border crossing technology, or gait-analysis surveillance tools. The critical distinction: this category does not cover biometric verification, where the sole purpose is to confirm that a specific person is the person they claim to be. Verification (1:1 matching) is outside Annex III; identification (1:N searching) is high-risk.
AI systems intended for biometric categorisation of natural persons according to sensitive or protected attributes or characteristics, based on the inference of those attributes or characteristics from biometric data. The category sits at the boundary of the Art 5(1)(g) prohibition: systems that infer the prohibited characteristics (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation) are banned outright; systems that categorise by sensitive biometric attributes but stop short of those inferences are high-risk. Drawing the line requires a careful legal assessment of each system's actual output categories.
AI systems intended to be used for emotion recognition — that is, systems that infer the emotional state of a natural person from biometric data such as facial expressions, voice patterns, body language, or physiological signals. These are high-risk when used outside the workplace and outside educational institutions (those two contexts trigger Art 5(1)(f) prohibition instead). Emotion recognition in customer service, healthcare, marketing, or security screening contexts is high-risk, not banned.
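As a rough orientation aid, the boundary logic described in the last few paragraphs can be sketched as a decision function. This is a simplification under our own category labels, not a substitute for the case-by-case legal assessment the text above calls for; the untargeted-scraping ban and the real-time law enforcement rules are not modelled.

```python
def classify_biometric_system(kind: str, context: str = "") -> str:
    """Rough triage under the boundaries described above. The 'kind' labels
    are ours; real classification needs legal analysis of the system's
    actual outputs and deployment context."""
    if kind == "emotion_recognition":
        # Art 5(1)(f): banned in workplace/education (save the medical or
        # safety exception); high-risk elsewhere under Annex III 1(c)
        return "prohibited" if context in ("workplace", "education") else "high-risk"
    if kind == "categorisation_prohibited_inference":
        return "prohibited"       # Art 5(1)(g): race, beliefs, sexual orientation...
    if kind == "categorisation_sensitive_attribute":
        return "high-risk"        # Annex III 1(b)
    if kind == "remote_identification":
        return "high-risk"        # Annex III 1(a); real-time LE use: see Art 5(1)(h)
    if kind == "verification":
        return "outside Annex III point 1"  # 1:1 matching is excluded
    return "needs legal assessment"
```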
Annex III point 1 (biometrics) is the only category in Annex III where a notified body may be required. For all other Annex III categories (points 2 through 8 — education, HR, financial services, law enforcement, etc.), Article 43(2) specifies that providers must follow only the internal control procedure in Annex VI, with no notified body involvement.
For biometrics (Annex III point 1), Article 43(1) creates a conditional choice:
A provider who has applied the harmonised standards referred to in Article 40, covering all the relevant requirements in Section 2, may choose the internal control procedure under Annex VI. This is a self-declaration by the provider with no third-party involvement. As of mid-2026, comprehensive harmonised standards specifically for biometric AI systems were still in development under CEN/CENELEC mandates. Providers relying on this path should verify which standards exist and whether they fully cover all requirements.
A provider must follow the Annex VII procedure — which requires an accredited notified body to assess both the quality management system and the technical documentation — in any of four situations: (1) no harmonised standards exist; (2) harmonised standards exist but the provider has not applied them, or has applied them only partially; (3) relevant common specifications exist but the provider has not applied them; or (4) harmonised standards have been published with a restriction, in which case Annex VII applies only to the restricted part. In practice, given the early state of biometric AI standards, most providers currently fall into situations (1) or (2) — meaning notified body assessment is effectively mandatory.
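Under those rules, the path selection reduces to something like the following sketch. The parameter names are ours, not the regulation's.

```python
def conformity_path(standards_exist: bool, applied_fully: bool,
                    restricted: bool, specs_exist: bool,
                    specs_applied: bool) -> str:
    """Sketch of the Article 43(1) choice for Annex III point 1 systems.
    Annex VII: a notified body assesses the QMS and technical documentation.
    Annex VI: provider internal control (self-assessment)."""
    if not standards_exist:
        return "Annex VII (notified body): no harmonised standards"      # situation 1
    if not applied_fully:
        return "Annex VII (notified body): standards not fully applied"  # situation 2
    if specs_exist and not specs_applied:
        return "Annex VII (notified body): common specs not applied"     # situation 3
    if restricted:
        return "Annex VII (notified body): on the restricted part only"  # situation 4
    return "Annex VI (internal control): provider may self-assess"
```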
Where the high-risk biometric AI system is intended to be put into service by law enforcement, immigration or asylum authorities, or by Union institutions and bodies in support of those authorities, the market surveillance authority acts as the notified body under Article 74(8) or (9). Commercial notified bodies are not used for these sensitive state deployments; the regulatory authority takes their place.
Certificate validity under Article 44
Notified body certificates for Annex III high-risk AI systems are valid for a maximum of four years, with renewable extensions. The notified body can suspend, withdraw, or restrict a certificate if the system no longer meets Section 2 requirements. Each substantial modification triggers a new conformity assessment — pre-determined changes documented at initial assessment time do not count as substantial modifications.
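A minimal sketch of those timing rules follows, with our own helper names; renewals and the notified body's suspension and withdrawal powers are not modelled.

```python
from datetime import date

MAX_VALIDITY_YEARS = 4  # Article 44: Annex III certificates valid at most four years

def certificate_expired(issued: date, today: date) -> bool:
    """True once more than four years have elapsed since issuance."""
    try:
        expiry = issued.replace(year=issued.year + MAX_VALIDITY_YEARS)
    except ValueError:  # certificate issued on 29 February
        expiry = issued.replace(year=issued.year + MAX_VALIDITY_YEARS, day=28)
    return today > expiry

def needs_new_assessment(change: str, pre_determined_changes: set[str]) -> bool:
    """A substantial modification triggers a fresh conformity assessment unless
    it was documented as pre-determined at initial assessment time (every other
    change is treated as substantial here, for illustration)."""
    return change not in pre_determined_changes
```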
Once a biometric AI system falls within Annex III HR-1, providers must comply with the full set of Articles 9–21 requirements before placing the system on the market or putting it into service.
Establish, document, and maintain a risk management system throughout the entire lifecycle. For biometric AI this must address risks to the fundamental rights of individuals who are identified, categorised, or whose emotions are inferred — in particular risks of misidentification, discriminatory outputs, and unauthorised use of data. Foreseeable misuse scenarios (such as a remote biometric ID system being repurposed for mass surveillance) must be explicitly identified and mitigated.
Biometric AI training datasets must be assessed for biases that could lead to differential accuracy across demographic groups — for example, lower recognition rates for darker skin tones or for non-Western facial features. Article 10(5) permits the processing of special categories of personal data (including biometric data) strictly for bias detection and correction, subject to strict conditions including pseudonymisation, no third-party transfer, and deletion once bias is corrected.
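As an illustration of what a differential-accuracy check might look like in practice, here is a minimal sketch that computes a false non-match rate per demographic group from labelled trial records and flags large gaps. The 1.5x threshold is an invented example; the Act sets no numeric limit.

```python
def false_non_match_rates(trials: list[tuple[str, bool, bool]]) -> dict[str, float]:
    """Per-group false non-match rate (FNMR) from (group, is_genuine_pair,
    matched) records: the share of genuine pairs the system failed to match."""
    genuine: dict[str, int] = {}
    misses: dict[str, int] = {}
    for group, is_genuine, matched in trials:
        if is_genuine:
            genuine[group] = genuine.get(group, 0) + 1
            if not matched:
                misses[group] = misses.get(group, 0) + 1
    return {g: misses.get(g, 0) / n for g, n in genuine.items()}

def flag_disparity(fnmr: dict[str, float], max_ratio: float = 1.5) -> bool:
    """Flag when the worst group's error rate exceeds the best group's by
    more than max_ratio (illustrative threshold only)."""
    worst, best = max(fnmr.values()), min(fnmr.values())
    return best > 0 and worst / best > max_ratio
```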
Draw up and maintain technical documentation that demonstrates compliance with all Section 2 requirements. For biometric systems, this includes documenting the training data composition, the intended deployment contexts, accuracy metrics across identified demographic groups, and the conditions under which accuracy degrades.
The system must technically allow for automatic recording of events throughout its lifetime. Providers retain logs under their control for at least six months. The logging must facilitate post-market monitoring and enable deployers to meet their own Article 26(6) log-retention obligations.
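A minimal sketch of such an event log and its retention window; the function names are ours, and a real system would need tamper-evident storage and a far richer event schema.

```python
from datetime import datetime, timedelta, timezone

SIX_MONTHS = timedelta(days=183)  # statutory minimum retention, approximated

def log_event(log: list[dict], event_type: str, detail: dict) -> None:
    """Append an automatically timestamped event record (Article 12 requires
    the system to technically allow this kind of recording)."""
    log.append({"ts": datetime.now(timezone.utc).isoformat(),
                "event": event_type, "detail": detail})

def purge_expired(log: list[dict], retention: timedelta = SIX_MONTHS) -> list[dict]:
    """Drop records older than the retention window. Six months is the
    minimum; other law may require keeping logs longer."""
    cutoff = datetime.now(timezone.utc) - retention
    return [rec for rec in log if datetime.fromisoformat(rec["ts"]) >= cutoff]
```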
Supply deployers with instructions that include accuracy and robustness metrics, known circumstances that reduce accuracy (e.g., poor lighting, certain facial coverings, age, ethnicity), human oversight requirements, and log collection mechanisms. Deployers of biometric AI rely heavily on these instructions to meet their own obligations and the Art 50(3) transparency duty.
Design the system so that responsible persons can monitor operation, detect anomalies and misidentifications, avoid automation bias, correctly interpret outputs, and override or stop the system in any situation. For remote biometric identification deployed by law enforcement, Article 14(5) adds an additional safeguard: no action or decision may be taken on the basis of an identification unless separately verified and confirmed by at least two competent persons.
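The Article 14(5) two-person rule lends itself to a simple gate in deployment tooling. A minimal sketch with invented names; authenticating reviewers and verifying their competence is left out.

```python
class IdentificationReview:
    """A law-enforcement remote biometric ID match is actionable only after
    at least two distinct competent reviewers separately confirm it (Art 14(5))."""

    def __init__(self, match_id: str):
        self.match_id = match_id
        self.confirmed_by: set[str] = set()

    def confirm(self, reviewer_id: str) -> None:
        self.confirmed_by.add(reviewer_id)  # a set keeps reviewers distinct

    @property
    def actionable(self) -> bool:
        return len(self.confirmed_by) >= 2
```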
Maintain consistent accuracy throughout the lifecycle. Declare accuracy metrics in the instructions for use. For biometric systems, accuracy metrics must reflect performance across relevant demographic groups. Protect against adversarial attacks including presentation attacks (spoofing with photos or masks), data poisoning of the training set, and confidentiality attacks designed to reconstruct biometric data from model outputs.
Beyond the high-risk obligations in Articles 9–21 and 26, Article 50(3) imposes a separate transparency duty on deployers of two biometric system types. This duty sits in Chapter IV of the Act and, under Article 113, applies from 2 August 2026, the same date as the high-risk deadline, as a free-standing obligation in its own right.
Deployers of an emotion recognition system or a biometric categorisation system must inform the natural persons exposed thereto of the operation of the system. This is not a mere privacy notice — it must be specific to the operation of the biometric system. The deployer must also process any personal data in accordance with GDPR (Regulation 2016/679), or with Directive 2016/680 for law enforcement processing. Article 50(3) expressly does not apply to systems permitted by law to detect, prevent, or investigate criminal offences with appropriate safeguards.
In practice: a retailer using emotion detection cameras in their store must inform shoppers that emotion recognition AI is in operation — at the entrance or point of first exposure. A workplace that deploys biometric categorisation to allocate tasks must inform employees. Failure to provide this disclosure is itself an Article 99(4) violation, carrying fines up to €15,000,000 or 3% of global annual turnover.
These situations indicate a higher-than-average compliance risk and warrant legal assessment before product launch or deployment:
- categorisation systems whose output categories sit near the Art 5(1)(g) line between sensitive attributes (high-risk) and prohibited inferences (banned);
- emotion recognition systems whose deployment context could drift into the workplace or educational institutions, turning a high-risk system into a prohibited one;
- systems marketed as 1:1 verification that could in practice be run as 1:N identification against a database;
- any biometric system destined for law enforcement, immigration, or asylum deployments, where the market surveillance authority acts as the notified body.
Do the pending amendment proposals change the biometric rules?
No changes to Annex III point 1, the Article 5 biometric prohibitions, or the Article 43 conformity assessment rules for biometrics are proposed under COM(2025) 836 or COM(2025) 837.
Is a phone's face unlock high-risk under Annex III?
No. Annex III point 1(a) explicitly excludes AI systems intended for biometric verification — that is, confirming that a specific natural person is who they claim to be — from the remote biometric identification high-risk category. A phone's face-unlock works solely to verify one claimed identity. Remote biometric identification, by contrast, identifies an unknown person by searching a database of many people. Only the identification use case is high-risk under Annex III; biometric verification falls outside it.
Is emotion recognition in customer service banned?
Not by Article 5(1)(f). The prohibition on emotion recognition applies only in the workplace and in educational institutions. Emotion detection used in a customer service context — for example, detecting frustration to route callers to human agents — is not prohibited by Article 5. However, it is classified as high-risk under Annex III point 1(c), so the full Articles 9–21 provider obligations and Article 26 deployer obligations apply from August 2026. Article 50(3) also requires the deployer to inform the persons whose emotions are being detected.
Does every biometric AI provider need a notified body?
It depends. Article 43(1) gives providers two paths for Annex III point 1 (biometrics) systems where harmonised standards have been applied: internal control under Annex VI, or a notified body assessment under Annex VII. However, if no harmonised standards exist, if the provider has not applied them fully, or if standards were published with a restriction, the provider must follow Annex VII — which requires a notified body. In practice, as of mid-2026, comprehensive harmonised standards for biometric AI remain in development, which means the notified body route is likely mandatory for most systems. This is materially different from education, HR, or financial services AI (Annex III points 2–8), where only internal control under Annex VI applies regardless.
Can law enforcement use real-time facial recognition in public spaces?
No — but it is heavily restricted. Article 5(1)(h) prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes unless the use is strictly necessary for one of three narrow objectives: (i) searching for specific victims of abduction, trafficking, sexual exploitation, or missing persons; (ii) preventing a specific, substantial, and imminent threat to life or a genuine and foreseeable terrorist attack; or (iii) locating or identifying a person suspected of an Annex II criminal offence punishable by at least four years' custodial sentence. Even when one of these objectives applies, prior judicial or independent administrative authorisation is required — or a 24-hour retroactive authorisation in duly justified urgency. Private sector real-time biometric identification in public spaces is not covered by Art 5(1)(h) at all, but it is high-risk under Annex III 1(a) and subject to full high-risk obligations.
How does the Art 5(1)(g) ban differ from Annex III 1(b) high-risk categorisation?
Article 5(1)(g) is an outright ban: it prohibits biometric categorisation systems that infer or deduce a person's race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation from their biometric data. This prohibition applies regardless of purpose or context — the system cannot be built and placed on the market at all. Annex III 1(b) is the high-risk classification for biometric categorisation systems used to categorise persons according to sensitive or protected attributes more broadly. A system that performs biometric categorisation but does not cross into the prohibited inferences is high-risk — meaning it can be built and deployed, but only after satisfying the full Articles 9–21 conformity obligations.
Related guides:
- Prohibited AI Practices — Article 5: all eight banned uses explained, with the €35M/7% penalty
- AI Provider Obligations: full Arts 9–21 checklist, CE marking, and EU database registration
- AI Deployer Obligations: Article 26 checklist for organisations that use biometric tools
- EU AI Act Fines and Penalties: four penalty tiers, €35M/7% vs €15M/3%, SME cap rules
- High-Risk AI System Checklist: step-by-step checklist covering all Annex III categories
- EU AI Act Timeline 2025–2030: every enforcement deadline in one place
Biometric AI has both prohibited and high-risk tiers. Regumatrix helps you map your system to the right compliance path and build a complete obligations roadmap.
Get started free