The EU AI Act creates two separate obligations for emotion recognition AI — an outright prohibition in specific contexts, and a transparency requirement that applies everywhere else.
Prohibited practice — Art 5(1)(f) · In force since February 2025
Emotion recognition AI in the workplace and educational institutions is banned outright. Maximum penalty: €35,000,000 or 7% of global annual turnover — whichever is higher — under Art 99(3). The parallel transparency obligation under Art 50(3) carries up to €15,000,000 or 3% under Art 99(4).
Not sure which track applies to your system? Regumatrix classifies your AI and returns the exact obligations in under a minute →
An emotion recognition system is “an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.”
This includes engagement scoring, attention detection, stress monitoring, and sentiment analysis — if they derive inferences from biometric signals such as facial images, voice tone, eye movement, or typing rhythm.
Art 5(1)(f) prohibits the placing on the market, the putting into service for this specific purpose, or the use of emotion recognition AI in two specific contexts. The only carve-outs written into the provision itself are systems intended for medical or safety reasons.
Workplace: any context where the natural persons being monitored are employees or workers. Includes offices, factory floors, remote work via video calls, call centres, HR screening processes, and productivity monitoring tools.
Examples: facial expression analysis in meetings, voice tone monitoring on support calls, engagement scoring in productivity software applied to staff.
Educational institutions: schools, universities, training centres, and online learning platforms providing formal education. The prohibition applies to students and learners.
Examples: exam proctoring software that infers stress or distraction, classroom AI that monitors student attention from camera feeds, student engagement scoring tools.
Outside the prohibited contexts, emotion recognition is not banned. But Art 50(3) creates a transparency obligation that applies to every deployer of an emotion recognition system — regardless of sector, context, or purpose.
Penalty for Art 50 violations: Art 99(4) — up to €15,000,000 or 3% of total worldwide annual turnover.
Exception: The Art 50(3) transparency obligation does not apply to AI systems permitted by law to detect, prevent, or investigate criminal offences — subject to appropriate safeguards for third-party rights and in accordance with Union law.
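The two tracks above can be sketched as a simple decision rule. This is an illustrative helper only, not legal advice; the function and its context/purpose labels are hypothetical names, not anything defined by the Act:

```python
# Sketch of the Art 5(1)(f) / Art 50(3) two-track logic described above.
# All identifiers are illustrative; this is not legal advice.

def classify_emotion_recognition(context: str, purpose: str) -> str:
    """Return the obligation track for an emotion recognition system.

    context: where the monitored persons are, e.g. "workplace",
             "education", "retail", "healthcare".
    purpose: the provider's intended purpose, e.g. "medical",
             "safety", or "general".
    """
    prohibited_contexts = {"workplace", "education"}
    exempt_purposes = {"medical", "safety"}  # Art 5(1)(f) carve-outs

    if context in prohibited_contexts and purpose not in exempt_purposes:
        # Banned outright; Art 99(3) fine exposure applies.
        return "prohibited under Art 5(1)(f)"
    # Everywhere else, the deployer must inform the persons exposed.
    return "transparency obligation under Art 50(3)"

print(classify_emotion_recognition("workplace", "general"))
print(classify_emotion_recognition("retail", "general"))
```

Note that a system qualifying for the medical or safety carve-out in a workplace still lands on the Art 50(3) track: the exceptions lift the prohibition, not the transparency duty.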
Outside the prohibited contexts, an emotion recognition system is itself classified as high-risk under Annex III point 1(c), and one that additionally performs biometric categorisation according to sensitive or protected attributes falls under point 1(b) as well. Either classification triggers the full Arts 9–21 provider obligations and Art 43 conformity assessment from August 2026. Note that biometric categorisation deducing race, political opinions, religious beliefs, or sexual orientation from biometric data is separately prohibited under Art 5(1)(g).
See the Biometric AI compliance guide for the full high-risk classification and obligation stack.
Your system may be in scope if it does any of the following: infers emotions or intentions from facial images or video; analyses voice tone or speech patterns; scores engagement, attention, or stress from camera feeds or eye tracking; or monitors typing rhythm or other behavioural biometric signals to infer internal states.
No changes are proposed under COM(2025) 836 or COM(2025) 837 for the emotion recognition obligations under Art 5(1)(f) and Art 50(3).
No. Article 5(1)(f) bans emotion recognition AI only in the workplace and educational institutions. In all other contexts — customer service tools, retail applications, healthcare monitoring, or safety systems — the prohibition does not apply. However, Article 50(3) creates a separate transparency obligation that applies to every deployer of an emotion recognition system regardless of context: the natural persons being monitored must be informed. The prohibition and the transparency obligation are independent tracks with different penalties.
Yes. If the emotion recognition is applied to employees during work — including remote work via video calls, productivity monitoring tools, or meeting software — it falls within the workplace prohibition. The EU AI Act does not distinguish between physical and digital workplaces. A tool that monitors employee facial expressions, voice tone, or typing patterns to infer emotional states violates Article 5(1)(f) when used in an employment context. Medical or safety reasons remain the only valid exceptions.
Article 50(3) requires that deployers inform the natural persons exposed to the system of its operation. This disclosure must be clear and distinguishable — not buried in terms of service — and must be provided at the latest at the time of first interaction or exposure (Article 50(5)). It must comply with applicable accessibility requirements. Deployers must also process personal data in accordance with GDPR or the Law Enforcement Directive as applicable. Failure to comply with Article 50 obligations carries a penalty of up to €15,000,000 or 3% of global annual turnover under Article 99(4).
Yes — two narrow exceptions are stated in Article 5(1)(f) itself. First, emotion recognition AI intended for medical reasons is excluded. This covers clinical applications such as mental health monitoring in therapeutic or diagnostic settings, not general employee wellness apps. Second, emotion recognition AI intended for safety reasons is excluded. This covers safety-critical applications such as driver fatigue detection or industrial equipment operator alertness monitoring. The intended purpose — as specified by the provider under Article 3(12) — must genuinely be medical or safety in nature. A system labelled as a 'wellbeing tool' that primarily monitors employee performance does not qualify.
Likely yes, if the system infers internal states from biometric data. The EU AI Act defines an emotion recognition system as 'an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data' (Article 3(39)). Engagement, attention, focus, stress, and fatigue levels are inferences about internal mental or physiological states derived from observable biometric signals. If your system processes camera feeds, eye tracking, voice patterns, or facial expressions to infer such states, the definition applies regardless of how the product is marketed.
Article 5(1)(f) violations fall under Article 99(3): the maximum fine is €35,000,000 or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. For SMEs and start-ups, Article 99(6) applies the lower of the two figures instead of the higher. These fines apply in addition to any GDPR enforcement action for the personal data processing involved.
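The fine-ceiling arithmetic above (higher of the two figures as the default, lower of the two for SMEs under Art 99(6)) can be illustrated with a short sketch. The function name and turnover figures are hypothetical:

```python
# Sketch of the Art 99 fine-ceiling arithmetic described above.
# Illustrative only; the turnover figures below are hypothetical.

def max_fine(turnover_eur: float, fixed_cap: float, pct_cap: float,
             is_sme: bool = False) -> float:
    """Maximum fine ceiling: the higher of the fixed cap and the
    turnover percentage (Art 99(3)/(4)), or the lower of the two
    for SMEs and start-ups (Art 99(6))."""
    pct_amount = turnover_eur * pct_cap
    return min(fixed_cap, pct_amount) if is_sme else max(fixed_cap, pct_amount)

# Art 5(1)(f) violation, €1bn turnover: 7% = €70m, which exceeds the €35m cap
print(max_fine(1_000_000_000, 35_000_000, 0.07))
# Same violation for an SME: the lower figure (€35m) applies instead
print(max_fine(1_000_000_000, 35_000_000, 0.07, is_sme=True))
# Art 50(3) violation under Art 99(4), €100m turnover: 3% = €3m, so the €15m cap governs
print(max_fine(100_000_000, 15_000_000, 0.03))
```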
Prohibited AI Practices — Article 5
All 8 banned uses explained with the €35M/7% penalty breakdown
Biometric AI Systems
Annex III HR-1 — face recognition, categorisation, conformity rules
AI Transparency Obligations (Art 50)
Chatbot disclosure, deepfake labelling, and Art 50(3) requirements
High-Risk AI Checklist
Full classification guide covering all 8 Annex III domains
EU AI Act Fines and Penalties
Four penalty tiers — €35M/7%, €15M/3%, SME inverse cap rule
Education AI Compliance Guide
EU AI Act obligations for EdTech — exam proctoring, tutoring, admission AI
Regumatrix checks your system against Article 5(1)(f), Article 50(3), and every other relevant provision — returning your risk tier, the specific obligations that apply, and your fine exposure under Article 99.
Get started free