Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
In force since 2 February 2025 · Max fine: €35M or 7% of global turnover

8 AI Practices That Are Illegal in the EU Right Now

Article 5 of the EU AI Act doesn't put these on a compliance to-do list — it bans them outright. No grace period, no phase-in. Since 2 February 2025, building, selling, or using any of these AI systems in the EU is a direct violation.

Article 99 penalty for prohibited AI: highest fine tier

Violations of Article 5 carry the largest penalty in the EU AI Act — up to €35,000,000 or 7% of global annual turnover, whichever is higher. For context, that is more than double the maximum for a high-risk AI violation (€15,000,000 or 3%), and nearly five times the ceiling for supplying incorrect information to regulators (€7,500,000 or 1%).
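To make the "whichever is higher" rule concrete, here is a minimal sketch of how the Article 99 ceiling scales with turnover. The function name and the assumption that turnover is expressed in euros are ours, purely for illustration — the actual fine within this ceiling is set by the enforcing authority.

```python
def article_99_cap(global_turnover_eur: float,
                   fixed_cap_eur: float = 35_000_000,
                   turnover_pct: float = 0.07) -> float:
    """Upper bound of a fine for a prohibited-practice violation:
    the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Below €500m turnover the €35m fixed cap dominates;
# a company with €2bn global turnover faces a €140m ceiling instead.
print(article_99_cap(100_000_000))    # → 35000000
print(article_99_cap(2_000_000_000))  # → 140000000.0
```

The crossover point sits at €500m turnover (7% of €500m = €35m) — above that, the percentage branch is always the binding one.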

Not sure if your AI product uses any of these techniques?

Describe what your system does and Regumatrix will check it against Article 5 — and every other article of the EU AI Act — in about 30 seconds. The report names your risk tier, the specific provisions that apply, and the actions you need to take.

Check my system now — 3 free analyses included

The 8 prohibited practices

A

Subliminal or manipulative AI

BANNED

AI that influences people's decisions using techniques they're not consciously aware of — or deliberately deceptive AI that tricks people into choices that cause them harm.

AI-powered dark patterns that exploit cognitive biases to extract consent or spending
Chatbots that build false emotional attachment to increase purchase frequency
Recommendation systems that exploit addiction loops to override a user's stated preferences
Art. 5(1)(a)
B

Exploiting vulnerable people

BANNED

AI that specifically targets people because of their age, disability, financial stress, or other vulnerability — to manipulate their behaviour in ways that harm them.

Gambling AI that identifies problem gamblers and serves them more aggressive nudges
Loan product AI that targets people in financial distress with high-interest offers
Elder care AI designed to maximise spending by users with cognitive decline
Art. 5(1)(b)
C

Social scoring

BANNED

AI that tracks people's behaviour over time to give them a score — and then uses that score to treat them better or worse in situations unrelated to the original data, or in ways that are disproportionate to their actual behaviour.

A platform that rates users on social behaviour and restricts their access to services based on that score
Insurance AI that penalises customers based on inferred lifestyle scores from social media
Employment AI that factors in social activity scores unrelated to job performance
Art. 5(1)(c)
D

Criminal personality profiling

BANNED

AI that predicts someone is likely to commit a crime based on their personality traits, appearance, or other characteristics — without any objective factual evidence linking them to an actual offence.

Predictive policing AI that flags individuals based on psychological profiles
Risk-scoring tools that assess criminal likelihood from facial features or voice tone
Pre-trial detention AI that inputs personality assessments to predict recidivism
Narrow exception: AI that supports human review of a person who is already a suspect, based on objective and verifiable facts directly linked to a criminal activity, is not prohibited.
Art. 5(1)(d)
E

Facial image scraping

BANNED

AI that builds or expands facial recognition databases by mass-collecting people's photos from the internet or CCTV footage without targeting specific individuals.

Tools that crawl social media to harvest and index billions of faces
Systems that process CCTV footage to add faces to an identification database
APIs that allow customers to upload bulk image collections to build facial datasets
Art. 5(1)(e)
F

Emotion detection at work and school

BANNED

AI that monitors and infers the emotional state of employees or students — in the workplace or in educational institutions.

HR tools that analyse facial expressions in video calls to gauge employee mood or engagement
Exam proctoring software that flags 'suspicious' emotional states during tests
Classroom AI that monitors student facial expressions to infer attention or stress
Narrow exception: Medical applications (e.g. detecting distress in clinical settings) and safety-critical uses (e.g. driver fatigue detection) are explicitly excluded from this ban.
Art. 5(1)(f)
G

Biometric categorisation by sensitive attributes

BANNED

AI that uses biometric data — face scans, fingerprints, gestures — to infer or categorise a person's race, religion, political views, sexual orientation, or trade union membership.

AI that estimates race from facial scan data for any commercial or government purpose
Systems that infer sexual orientation from analysis of appearance or movement patterns
Tools that classify individuals by political affiliation from facial features
Art. 5(1)(g)
H

Real-time facial recognition by police in public

BANNED

Law enforcement AI that scans crowds in real time to identify people in public spaces — without specific judicial authorisation for each use.

AI camera networks scanning festival crowds or public transport against watchlists
Real-time identification of protesters or event attendees by police
Automated vehicle-mounted systems scanning pedestrians against a database
Narrow exceptions: searching for specific missing persons or trafficking victims; preventing an imminent and specific terrorist threat; and identifying suspects in serious crimes (offences punishable by 4+ years). Each use requires prior judicial authorisation and registration in the EU database.
Art. 5(1)(h)

Does your AI “border” on any of these?

Most Article 5 violations don't come from companies that set out to break the law. They come from features that seemed innocuous when designed — engagement optimisation, personalisation, behavioural analytics — that cross a line when examined against the actual regulation text. Article 5 targets both the objective and the effect of a practice. That means unintended harm is still a violation.

If you have any of the following in your product, it's worth getting a precise analysis before you launch or continue:

  • Personalisation that factors in emotional state or vulnerability signals
  • Engagement scoring that can influence behaviour over time
  • Emotion or sentiment detection in any workplace or education context
  • Any biometric processing that derives personal characteristics
  • Behavioural scoring that feeds into consequential decisions
Check my system against Article 5 →
PROPOSAL — not yet enacted law · COM(2025) 836 & COM(2025) 837

What the Digital Omnibus proposals change — and what they do not

COM(2025) 837 — new GDPR lawful basis for AI training (Art 3 pt3): does it create a loophole in Article 5?

837 proposes adding Art 9(2)(k) to the GDPR — a new lawful basis permitting the processing of special category personal data (including biometric data, health data, and data revealing racial or ethnic origin) specifically for the development and operation of AI systems and AI models.

Some readers will assume this creates a loophole in Art 5(1)(g) — the ban on biometric categorisation by sensitive attributes. It does not. Here is why:

  • GDPR Art 9(2)(k) is about whether you can process biometric data to train your AI. It is a lawful basis for data processing.
  • AI Act Art 5(1)(g) is about whether your AI system can output categorisations of race, religion, or sexual orientation from biometric data. It is a prohibition on what your system does.
  • Both laws apply simultaneously. If 837 is enacted, you could have a lawful GDPR basis to train on biometric data — and still be fully prohibited under the AI Act from using that training to categorise people by protected attributes as output.

837 Art 3 pt3 (proposed GDPR Art 9(2)(k)) · Current law: Art 5(1)(g) EU AI Act applies in full

COM(2025) 836 — who enforces Article 5 violations may change (Art 1 pt25)

Under current law, Art 5 violations are enforced by national market surveillance authorities — one per EU member state. Under 836 Art 1 pt25, the AI Office would gain exclusive competence to supervise and enforce against two specific actor types:

  • Companies that are both the GPAI model provider and the AI system provider — i.e. integrated products built on their own foundation model
  • AI systems embedded in Very Large Online Platforms (VLOPs) or Very Large Online Search Engines (VLOSEs) as defined under the Digital Services Act

This does not change what is prohibited under Article 5. It changes which authority investigates and fines you. For most companies, national authorities still handle enforcement. If 836 is enacted, the largest AI providers and platform operators would face the European Commission directly.

Common questions

Are these AI bans already in force?
Yes. The Article 5 prohibitions have been in force since 2 February 2025 — six months after the EU AI Act entered into force on 1 August 2024. These are not future rules. If you are using any of the eight prohibited practices today, you are already in violation. Penalties under Article 99 apply: up to €35,000,000 or 7% of global annual turnover, whichever is higher.
Does it matter if my AI system only accidentally causes harm?
No. Article 5 prohibits both the 'objective' and the 'effect' of the harmful practice. If your AI system produces a prohibited outcome — even unintentionally — it falls within the ban. For subliminal manipulation and exploitation of vulnerabilities, the regulation targets systems with the 'objective, or the effect' of distorting behaviour. Intent is not a defence.
Is emotion AI completely banned?
Not everywhere. Article 5(1)(f) bans emotion recognition AI in the workplace and in educational institutions. It does not prohibit emotion recognition entirely — medical applications and safety-critical uses (e.g. detecting driver fatigue) are explicitly excluded. If you are considering emotion AI in any commercial setting, the location of deployment determines legality.
My company is based outside the EU — do these rules still apply to me?
Yes. Article 2 of the EU AI Act applies to any provider or deployer whose AI system is placed on the EU market or whose output is used in the EU — regardless of where the company is located. A US or UK company whose product is used by EU users, or whose AI output affects EU residents, is within scope.
Is real-time facial recognition by police always banned?
It is banned in publicly accessible spaces for law enforcement unless three narrow exceptions apply: searching for specific victims of abduction, trafficking or missing persons; preventing a specific and imminent terrorist threat; or identifying a suspect of a serious criminal offence (punishable by 4+ years). Each use requires prior judicial authorisation, a fundamental rights impact assessment, and registration in the EU database. There is no general authorisation — every deployment needs a specific, documented justification.

Related compliance guides

  • Fines & penalties explained (Art 99)
  • Biometric AI obligations
  • Emotion recognition AI rules
  • Social scoring in detail
  • Is my AI high-risk? (Checklist)
  • All enforcement dates

Know exactly where your system stands — before a regulator tells you

Article 5 is just one chapter. If your system isn't prohibited, it still might be high-risk — which means a different set of obligations: risk management, technical documentation, conformity assessment, human oversight, and registration before August 2026.

Describe your AI system in plain language. Regumatrix checks it against every article of the EU AI Act and returns your risk tier, Annex classification, the exact obligations that apply, and your fine exposure under Article 99. Eight sections. About 30 seconds.

Analyse my system free — 3 checks included →
All compliance guides

8-section report · Article citations · ~30 seconds · No credit card