Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Prohibited in workplace + schools · Transparency required everywhere

Emotion Recognition AI: What the EU AI Act Bans and What It Doesn't

The EU AI Act creates two separate obligations for emotion recognition AI — an outright prohibition in specific contexts, and a transparency requirement that applies everywhere else.

Prohibited practice — Art 5(1)(f) · In force since February 2025

Emotion recognition AI in the workplace and educational institutions is banned outright. Maximum penalty under Art 99(3): €35,000,000 or 7% of global annual turnover, whichever is higher. Violations of the parallel transparency obligation under Art 50(3) carry fines of up to €15,000,000 or 3% under Art 99(4).

Not sure which track applies to your system? Regumatrix classifies your AI and returns the exact obligations in under a minute →

Definition — Art 3(39)

An emotion recognition system is “an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.”

This includes engagement scoring, attention detection, stress monitoring, and sentiment analysis — if they derive inferences from biometric signals such as facial images, voice tone, eye movement, or typing rhythm.

Track 1 — Banned contexts

Art 5(1)(f) prohibits placing on the market, putting into service for this specific purpose, or using emotion recognition AI in two specific contexts. The ban turns on the use context and the system's intended purpose, not on how the product is branded or marketed.

The workplace

Any context where the natural persons being monitored are employees or workers. Includes offices, factory floors, remote work via video calls, call centres, HR screening processes, and productivity monitoring tools.

Examples: facial expression analysis in meetings, voice tone monitoring on support calls, engagement scoring in productivity software applied to staff.

Educational institutions

Schools, universities, training centres, and online learning platforms providing formal education. The prohibition applies to students and learners.

Examples: exam proctoring software that infers stress or distraction, classroom AI that monitors student attention from camera feeds, student engagement scoring tools.

Two narrow exceptions — both require genuine medical or safety intent

Medical reasons: Clinical applications such as monitoring patient emotional states in therapeutic or diagnostic settings. Does not cover general wellness apps without clinical oversight.
Safety reasons: Safety-critical monitoring such as driver fatigue detection or industrial operator alertness systems. The Art 3(12) intended purpose must be safety — not employee productivity framed as wellbeing.

Track 2 — Transparency obligation everywhere

Outside the prohibited contexts, emotion recognition is not banned. But Art 50(3) creates a transparency obligation that applies to every deployer of an emotion recognition system — regardless of sector, context, or purpose.

What deployers must do under Art 50(3)

  • Inform natural persons exposed to the system of its operation — not just general users, but the specific individuals being monitored
  • Make the disclosure clear and distinguishable — not buried in terms of service or privacy policies
  • Provide the disclosure at the latest at the time of first interaction or exposure (Art 50(5))
  • Comply with applicable accessibility requirements
  • Process personal data in accordance with GDPR or the applicable data protection law
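
As a quick self-check, the five duties above can be tracked as boolean flags. This is an illustrative sketch only; the field names are our own shorthand, not terms from the Act:

```python
from dataclasses import dataclass

@dataclass
class Art50Disclosure:
    informs_exposed_persons: bool    # the monitored individuals themselves
    clear_and_distinguishable: bool  # not buried in ToS or a privacy policy
    at_first_exposure: bool          # timing rule in Art 50(5)
    accessible: bool                 # applicable accessibility requirements
    lawful_data_processing: bool     # GDPR / applicable data protection law

    def gaps(self) -> list[str]:
        """Names of the duties not yet satisfied."""
        return [duty for duty, met in vars(self).items() if not met]

# A deployment that discloses only after monitoring has started:
Art50Disclosure(True, True, False, True, True).gaps()  # -> ['at_first_exposure']
```

Any non-empty `gaps()` result flags a duty to remediate before (or at) the moment of first exposure.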

Penalty for Art 50 violations: Art 99(4) — up to €15,000,000 or 3% of total worldwide annual turnover.

Exception: The Art 50(3) transparency obligation does not apply to AI systems permitted by law to detect, prevent, or investigate criminal offences — subject to appropriate safeguards for third-party rights and in accordance with Union law.
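
The two tracks can be summarised as a decision sketch. This is a simplified illustration, not legal logic: the `Use` fields and the context/purpose strings are our own labels, not terms defined in the Act:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Track(Enum):
    PROHIBITED = auto()    # Art 5(1)(f) ban applies
    TRANSPARENCY = auto()  # Art 50(3) disclosure duty applies

@dataclass
class Use:
    infers_from_biometrics: bool  # Art 3(39) threshold question
    context: str                  # e.g. "workplace", "education", "retail"
    intended_purpose: str         # Art 3(12), e.g. "medical", "safety", "productivity"

def classify(use: Use) -> Optional[Track]:
    """Return the applicable track, or None if out of scope entirely."""
    if not use.infers_from_biometrics:
        return None  # not an emotion recognition system under Art 3(39)
    if use.context in {"workplace", "education"} and \
       use.intended_purpose not in {"medical", "safety"}:
        return Track.PROHIBITED
    # Everywhere else, and in the exception cases, Art 50(3) still applies
    return Track.TRANSPARENCY
```

For example, `classify(Use(True, "workplace", "productivity"))` lands on the prohibited track, while the same biometric inference in a retail context falls back to the transparency track.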

When emotion recognition is also high-risk

Outside the prohibited contexts, an emotion recognition system is also classified as high-risk in its own right under Annex III point 1(c). This triggers the full Arts 9–21 provider obligations and Art 43 conformity assessment from August 2026. Biometric categorisation that infers race, political opinions, religious beliefs, or sexual orientation from biometric data goes further still: it is itself a prohibited practice under Art 5(1)(g), not merely high-risk.

See the Biometric AI compliance guide for the full high-risk classification and obligation stack.

Warning signals — get a precise classification

Your system may be in scope if it does any of the following:

  • Analyses facial expressions, eye movement, or micro-expressions in any context
  • Infers mood, stress, engagement, or attention from voice tone or speech patterns
  • Uses biometric sensors to derive emotional, attentional, or intentional states
  • Monitors employees during remote or in-person work activities
  • Processes classroom or exam video to score student attention or behaviour
  • Labels or categorises users by inferred emotional or mental state

Check your AI system free →

No changes are proposed under COM(2025) 836 or COM(2025) 837 for the emotion recognition obligations under Art 5(1)(f) and Art 50(3).

Frequently asked questions

Does the EU AI Act ban all emotion recognition AI?

No. Article 5(1)(f) bans emotion recognition AI only in the workplace and educational institutions. In all other contexts — customer service tools, retail applications, healthcare monitoring, or safety systems — the prohibition does not apply. However, Article 50(3) creates a separate transparency obligation that applies to every deployer of an emotion recognition system regardless of context: the natural persons being monitored must be informed. The prohibition and the transparency obligation are independent tracks with different penalties.

Does Article 5(1)(f) apply to remote working tools and video call software?

Yes. If the emotion recognition is applied to employees during work — including remote work via video calls, productivity monitoring tools, or meeting software — it falls within the workplace prohibition. The EU AI Act does not distinguish between physical and digital workplaces. A tool that monitors employee facial expressions, voice tone, or typing patterns to infer emotional states violates Article 5(1)(f) when used in an employment context. Medical or safety reasons remain the only valid exceptions.

What does the Article 50(3) disclosure actually require from a deployer?

Article 50(3) requires that deployers inform the natural persons exposed to the system of its operation. This disclosure must be clear and distinguishable — not buried in terms of service — and must be provided at the latest at the time of first interaction or exposure (Article 50(5)). It must comply with applicable accessibility requirements. Deployers must also process personal data in accordance with GDPR or the Law Enforcement Directive as applicable. Failure to comply with Article 50 obligations carries a penalty of up to €15,000,000 or 3% of global annual turnover under Article 99(4).

Are there any exceptions to the workplace and school ban?

Yes — two narrow exceptions are stated in Article 5(1)(f) itself. First, emotion recognition AI intended for medical reasons is excluded. This covers clinical applications such as mental health monitoring in therapeutic or diagnostic settings, not general employee wellness apps. Second, emotion recognition AI intended for safety reasons is excluded. This covers safety-critical applications such as driver fatigue detection or industrial equipment operator alertness monitoring. The intended purpose — as specified by the provider under Article 3(12) — must genuinely be medical or safety in nature. A system labelled as a 'wellbeing tool' that primarily monitors employee performance does not qualify.

My system detects 'engagement' or 'attention' using camera data — is that emotion recognition?

Likely yes, if the system infers internal states from biometric data. The EU AI Act defines an emotion recognition system as 'an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data' (Article 3(39)). Engagement, attention, focus, stress, and fatigue levels are inferences about internal mental or physiological states derived from observable biometric signals. If your system processes camera feeds, eye tracking, voice patterns, or facial expressions to infer such states, the definition applies regardless of how the product is marketed.

What is the penalty for using prohibited emotion recognition AI in the workplace?

Article 5(1)(f) violations fall under Article 99(3): the maximum fine is €35,000,000 or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. For SMEs and start-ups, Article 99(6) applies the lower of the two figures instead of the higher. These fines apply in addition to any GDPR enforcement action for the personal data processing involved.
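
The interaction between the two ceilings and the SME rule can be made concrete with a small calculation. A minimal sketch, assuming turnover is known exactly; the function name is ours:

```python
def art99_cap(fixed_eur: int, pct: int, turnover_eur: int, is_sme: bool = False) -> float:
    """Fine ceiling under Art 99: the higher of the fixed amount and the
    turnover percentage, flipped to the lower of the two for SMEs (Art 99(6))."""
    pct_based = turnover_eur * pct / 100
    return float(min(fixed_eur, pct_based) if is_sme else max(fixed_eur, pct_based))

# Art 5(1)(f) violation, Art 99(3) tier: €35M or 7%, whichever is higher
art99_cap(35_000_000, 7, 1_000_000_000)     # -> 70000000.0 (7% exceeds €35M)
# Same violation by an SME: Art 99(6) caps at the lower figure
art99_cap(35_000_000, 7, 10_000_000, True)  # -> 700000.0
```

The same function covers the Art 99(4) tier for Art 50 violations by passing `35_000_000 → 15_000_000` and `7 → 3`.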

Related guides

Prohibited AI Practices — Article 5

All 8 banned uses explained with the €35M/7% penalty breakdown

Biometric AI Systems

Annex III HR-1 — face recognition, categorisation, conformity rules

AI Transparency Obligations (Art 50)

Chatbot disclosure, deepfake labeling, and Art 50(3) requirements

High-Risk AI Checklist

Full classification guide covering all 8 Annex III domains

EU AI Act Fines and Penalties

Four penalty tiers — €35M/7%, €15M/3%, SME inverse cap rule

Education AI Compliance Guide

EU AI Act obligations for EdTech — exam proctoring, tutoring, admission AI

Know exactly which track applies to your AI

Regumatrix checks your system against Article 5(1)(f), Article 50(3), and every other relevant provision — returning your risk tier, the specific obligations that apply, and your fine exposure under Article 99.

Get started free