Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Prohibited · In force since February 2025

Social Scoring AI: Banned Under Article 5 for Both Private and Government Operators

Article 5(1)(c) of the EU AI Act prohibits AI-based social scoring by any operator — public or private. There are no exceptions. Understanding the two-limb test is essential to identifying whether your system is in scope.

Prohibited practice — Art 5(1)(c) · No exceptions

Unlike some other provisions, Article 5(1)(c) carries no exemptions and no derogation procedure. Maximum penalty: €35,000,000 or 7% of global annual turnover — whichever is higher — under Art 99(3). For SMEs, the lower of the two figures applies under Art 99(6).

Does your scoring system pass the two-limb test? Regumatrix classifies your AI and tells you exactly where you stand in under a minute →

The two-limb test — Art 5(1)(c)

The prohibition applies when both preconditions below are met and the AI system's outputs then lead to treatment falling under either of the two limbs.

Precondition — the AI system must:

  1. Evaluate or classify natural persons or groups over a certain period of time (longitudinal tracking — not a single isolated assessment)
  2. Based on their social behaviour or known, inferred, or predicted personal or personality characteristics

Both conditions must apply. One-off assessments or systems operating purely within a single decision context may fall outside the definition.

Limb (i)

Unrelated context treatment

The social score leads to detrimental or unfavourable treatment of natural persons in social contexts unrelated to the contexts in which the data was originally generated or collected.

Example: behaviour data collected on a social media platform is used to restrict access to housing, employment, or financial services.

Limb (ii)

Unjustified or disproportionate treatment

The social score leads to detrimental treatment that is unjustified or disproportionate to the social behaviour or its gravity.

Example: a minor infraction in one area causes permanent exclusion from unrelated services, or consequences massively exceed the severity of the underlying behaviour.

Either limb alone triggers the prohibition. A system that leads to unrelated-context treatment (limb i) is banned even if the treatment is proportionate. A system that leads to disproportionate treatment within the same context (limb ii) is banned even if it never transfers data elsewhere.
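Purely as an illustration (the field names are my own shorthand, not statutory language), the structure above reduces to: both preconditions AND at least one limb:

```python
from dataclasses import dataclass

@dataclass
class ScoringSystem:
    # Preconditions: both must hold for the system to be in scope.
    evaluates_over_time: bool          # longitudinal tracking, not a one-off
    based_on_social_behaviour: bool    # or known/inferred/predicted traits
    # Limbs: either one alone triggers the prohibition.
    unrelated_context_treatment: bool  # limb (i)
    disproportionate_treatment: bool   # limb (ii)

def prohibited_under_art_5_1_c(s: ScoringSystem) -> bool:
    """Sketch of the Art 5(1)(c) two-limb test described above."""
    preconditions = s.evaluates_over_time and s.based_on_social_behaviour
    either_limb = s.unrelated_context_treatment or s.disproportionate_treatment
    return preconditions and either_limb
```

Note how the sketch mirrors the text: a cross-context system is flagged even if treatment is proportionate, while a one-off assessment fails the preconditions and is never flagged.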

Social scoring in practice

These practices are likely prohibited

Cross-context score transfer

A social conduct score compiled by a government agency used to restrict access to public transport, banking services, or commercial applications.

Platform-to-unrelated-service restriction

A platform restricts users' access to a third-party marketplace or employment service based on an accumulated behavioural score from within the platform.

Insurance score used for employment

An insurer's actuarial behaviour score shared with HR departments to influence hiring, promotion, or employment decisions — unrelated context, same data.

Disproportionate consequence scoring

A delivery platform deactivates accounts permanently based on a minor infraction score when the consequences are grossly disproportionate to the scored behaviour.

These practices are generally not prohibited

Credit score for credit decisions

A financial institution uses a credit score — derived from financial behaviour — only for credit approval decisions. Same context, proportionate treatment.

Platform reputation within same service

A marketplace rates sellers based on fulfilment performance and shows that rating to buyers on the same platform. The score stays within the original context.

Single-context proportionate scoring

A ride-sharing app rates drivers on driving behaviour and uses that score only to assign rides. Single context, proportionate treatment, no profile accumulation.

The credit scoring nuance — a closer look

Credit scoring occupies a grey area. A score used only for credit decisions — based on financial behaviour, for financial outcomes — is within-context and generally does not trigger Article 5(1)(c).

The prohibition is engaged when:

  • A credit score is shared with or sold to employers, landlords, or other non-financial service providers (limb i)
  • A credit score blocks access to essential services in a way grossly disproportionate to the underlying financial incident (limb ii)

Separately, Annex III point 5(b) classifies AI systems that assess the creditworthiness of natural persons as high-risk. This is distinct from the prohibition and triggers Chapter III obligations instead.

Is your scoring system in the grey area?

Common warning signals:

  • Your system aggregates user behaviour data across sessions or time periods
  • Scores or classifications influence decisions outside the original data context
  • Users can be systematically disadvantaged based on past behaviour in one area
  • The system outputs are used by third parties for unrelated decisions
  • Consequences jump from minor to severe with little gradation in between
Get a classification for your system
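As a rough self-screen, the warning signals above can be tallied. This is illustrative only; the signal names are informal shorthand for the bullets above, and any positive count simply means the system deserves closer review:

```python
# Heuristic grey-area screen. Illustrative only; not legal advice.
# Signal names are informal shorthand for the warning signals above.
WARNING_SIGNALS = (
    "aggregates_behaviour_across_time",
    "influences_decisions_outside_original_context",
    "systematic_disadvantage_from_past_behaviour",
    "outputs_used_by_third_parties",
    "little_gradation_in_consequences",
)

def grey_area_signal_count(answers: dict) -> int:
    """Count how many warning signals apply to a system."""
    return sum(1 for s in WARNING_SIGNALS if answers.get(s, False))
```

For example, a platform that aggregates behaviour over time and shares scores with third parties would register two signals, which is already enough to warrant a closer look at Art 5(1)(c).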

No changes are proposed under COM(2025) 836 or COM(2025) 837 for the social scoring prohibition under Art 5(1)(c).

Frequently asked questions

Does Article 5(1)(c) only apply to government social credit systems?

No. Article 5(1)(c) explicitly covers both public authorities and private operators — the prohibition applies to 'the placing on the market, the putting into service or the use of AI systems' without limiting who can operate them. Private platforms, insurers, employers, landlords, and any other entity that evaluates and classifies natural persons based on social behaviour in a way that leads to cross-context or disproportionate treatment are all covered. This is a deliberate design choice: the EU AI Act targets the practice, not the sector.

Is credit scoring social scoring under the EU AI Act?

Not automatically. Credit scoring based on financial behaviour and used only for credit decisions — the same context in which the data was collected — generally does not meet the two-limb test. However, if a credit score is used to restrict access to unrelated services (housing, employment, public transport, insurance), limb (i) is triggered: detrimental treatment in a context unrelated to where the data was collected. Similarly, if a credit score is applied in a way disproportionate to the actual financial behaviour — for example, denying all services over a single missed payment — limb (ii) is engaged. The key is cross-context transfer or disproportionality, not the existence of scoring itself.

Does 'over a certain period of time' mean a single-event assessment is outside the prohibition?

Generally yes. The prohibition targets longitudinal tracking and evaluation — systems that accumulate and consolidate behavioural data over time to build a profile and generate a score. A single on-the-spot assessment based on one data point does not fit the prohibition's definition. However, if the system is designed to build an evolving score from repeated interactions or observations, even if individual decisions look like point-in-time assessments, the overall system may still be social scoring.

Who enforces Article 5(1)(c) violations?

National market surveillance authorities (Arts 74–84) are the primary enforcement bodies. The EU AI Office has a central coordination role, particularly for violations involving GPAI models or systemic cross-border issues. Prohibited practice violations under Art 5 are subject to the highest penalty tier under Art 99(3): up to €35,000,000 or 7% of global annual turnover, whichever is higher. For SMEs and start-ups, Art 99(6) applies the lower figure.

Can a recommendation system be classified as social scoring?

Only if it evaluates users' social behaviour over time AND uses that evaluation to treat them adversely in an unrelated context or disproportionately to their actual behaviour. A product recommendation engine that personalises content within the same service context — based on purchase history for purchase recommendations, for example — does not meet the two-limb test. But a platform that builds a 'trustworthiness profile' of users and then uses that profile to restrict access to other services on the platform or to third-party services would more likely be classified as social scoring.

Related guides

Prohibited AI Practices — Article 5

All 8 banned uses explained with the €35M/7% penalty breakdown

EU AI Act Fines and Penalties

Four penalty tiers, SME inverse cap, and how fines are calculated

Biometric AI Systems

Annex III HR-1 high-risk classification for biometric systems

Fundamental Rights Impact Assessment

When FRIAs are required and how to conduct one under Art 27

EU AI Act Overview

Complete structure of the EU AI Act — all four risk categories

High-Risk AI Checklist

Classify your system against all 8 Annex III domains

Know whether your scoring system is prohibited

Regumatrix evaluates your AI system against the Article 5(1)(c) two-limb test and every other EU AI Act provision — returning the exact risk classification and the steps required to comply or redesign.

Get started free