Article 5(1)(c) of the EU AI Act prohibits AI-based social scoring by any operator — public or private. There are no exceptions. Understanding the two-limb test is essential to identifying whether your system is in scope.
Prohibited practice — Art 5(1)(c) · No exceptions
Unlike some other provisions, Article 5(1)(c) carries no exemptions and no derogation procedure. Maximum penalty: €35,000,000 or 7% of global annual turnover — whichever is higher — under Art 99(3). For SMEs, the lower of the two figures applies under Art 99(6).
Does your scoring system pass the two-limb test? Regumatrix classifies your AI and tells you exactly where you stand in under a minute →
The prohibition applies when two preconditions are met: the AI system (1) evaluates or classifies natural persons over a certain period of time, and (2) does so based on their social behaviour or known, inferred or predicted personal characteristics. The resulting social score must then lead to treatment matching either of the two limbs below.
Both preconditions must apply. One-off assessments, or systems operating purely within a single decision context, may fall outside the definition.
Limb (i): unrelated context. The social score leads to detrimental or unfavourable treatment of natural persons in social contexts unrelated to the contexts in which the data was originally generated or collected.
Example: behaviour data collected on a social media platform is used to restrict access to housing, employment, or financial services.
Limb (ii): disproportionate treatment. The social score leads to detrimental or unfavourable treatment that is unjustified or disproportionate to the social behaviour or its gravity.
Example: a minor infraction in one area causes permanent exclusion from unrelated services, or the consequences grossly exceed the severity of the underlying behaviour.
Either limb alone triggers the prohibition. A system that leads to unrelated-context treatment (limb i) is banned even if the treatment is proportionate. A system that leads to disproportionate treatment within the same context (limb ii) is banned even if it never transfers data elsewhere.
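The decision logic above can be sketched as a short Python function. This is an illustrative sketch only — the `ScoringSystem` fields and function name are hypothetical, not part of any official tooling: both preconditions must hold, after which either limb alone triggers the prohibition.

```python
from dataclasses import dataclass

@dataclass
class ScoringSystem:
    """Illustrative model of an AI scoring practice (field names are hypothetical)."""
    evaluates_social_behaviour: bool      # precondition: evaluates/classifies persons on social behaviour
    accumulates_over_time: bool           # precondition: longitudinal score, not a one-off assessment
    unrelated_context_treatment: bool     # limb (i): detrimental treatment outside the data's original context
    disproportionate_treatment: bool      # limb (ii): treatment unjustified/disproportionate to the behaviour

def is_prohibited_social_scoring(s: ScoringSystem) -> bool:
    # Both preconditions must hold before the limbs are tested at all.
    if not (s.evaluates_social_behaviour and s.accumulates_over_time):
        return False
    # Either limb alone is enough to engage Art 5(1)(c).
    return s.unrelated_context_treatment or s.disproportionate_treatment

# A within-context, proportionate credit score fails both limbs:
credit = ScoringSystem(True, True, False, False)
# is_prohibited_social_scoring(credit) -> False
```

Note how the structure mirrors the text: the preconditions are conjunctive (AND), the limbs disjunctive (OR).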
Cross-context score transfer
A social conduct score compiled by a government agency and used to restrict access to public transport, banking services, or commercial applications.
Platform-to-unrelated-service restriction
A platform restricts users' access to a third-party marketplace or employment service based on an accumulated behavioural score from within the platform.
Insurance score used for employment
An insurer's actuarial behaviour score shared with HR departments to influence hiring, promotion, or employment decisions — unrelated context, same data.
Disproportionate consequence scoring
A delivery platform deactivates accounts permanently based on a minor infraction score when the consequences are grossly disproportionate to the scored behaviour.
Credit score for credit decisions
A financial institution uses a credit score — derived from financial behaviour — only for credit approval decisions. Same context, proportionate treatment.
Platform reputation within same service
A marketplace rates sellers based on fulfilment performance and shows that rating to buyers on the same platform. The score stays within the original context.
Single-context proportionate scoring
A ride-sharing app rates drivers on driving behaviour and uses that score only to assign rides. Single context, proportionate treatment, no profile accumulation.
Credit scoring occupies a grey area. A score used only for credit decisions — based on financial behaviour, for financial outcomes — is within-context and generally does not trigger Article 5(1)(c).
The prohibition is engaged when the system evaluates or classifies people based on their social behaviour over time and the resulting score leads to detrimental treatment that is either cross-context or disproportionate. Common warning signals: behavioural scores that follow users into unrelated services, and consequences that escalate well beyond the gravity of the scored behaviour.
No changes are proposed under COM(2025) 836 or COM(2025) 837 for the social scoring prohibition under Art 5(1)(c).
No. Article 5(1)(c) explicitly covers both public authorities and private operators — the prohibition applies to 'the placing on the market, the putting into service or the use of AI systems' without limiting who can operate them. Private platforms, insurers, employers, landlords, and any other entity that evaluates and classifies natural persons based on social behaviour in a way that leads to cross-context or disproportionate treatment are all covered. This is a deliberate design choice: the EU AI Act targets the practice, not the sector.
Not automatically. Credit scoring based on financial behaviour and used only for credit decisions — in the same context as data collection — generally does not meet the two-limb test. However, if a credit score is used to restrict access to unrelated services (housing, employment, public transport, insurance), that triggers limb (i): detrimental treatment in a context unrelated to where the data was collected. Similarly, if a credit score is applied disproportionately to the actual financial behaviour — for example, denying all services based on a single missed payment — limb (ii) is engaged. The key is cross-context transfer or disproportionality, not the existence of scoring itself.
Generally yes. The prohibition targets longitudinal tracking and evaluation — systems that accumulate and consolidate behavioural data over time to build a profile and generate a score. A single on-the-spot assessment based on one data point does not fit the prohibition's definition. However, if the system is designed to build an evolving score from repeated interactions or observations, even if individual decisions look like point-in-time assessments, the overall system may still be social scoring.
National market surveillance authorities (Arts 74–84) are the primary enforcement bodies. The EU AI Office has a central coordination role, particularly for violations involving GPAI models or systemic cross-border issues. Prohibited practice violations under Art 5 are subject to the highest penalty tier under Art 99(3): up to €35,000,000 or 7% of global annual turnover, whichever is higher. For SMEs and start-ups, Art 99(6) applies the lower figure.
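The penalty ceiling described above is simple arithmetic. A minimal sketch, assuming turnover is expressed in euros (the function name is illustrative; actual fines are set case by case against the Art 99 assessment criteria, this only computes the statutory upper bound):

```python
def max_art5_fine(annual_turnover_eur: int, is_sme: bool = False) -> float:
    """Upper bound of the Art 99(3) penalty tier for prohibited practices.

    Non-SMEs: EUR 35,000,000 or 7% of worldwide annual turnover, whichever is higher.
    SMEs/start-ups (Art 99(6)): whichever of the two figures is lower.
    """
    fixed_cap = 35_000_000
    turnover_cap = annual_turnover_eur * 7 / 100  # 7% of worldwide annual turnover
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

max_art5_fine(1_000_000_000)               # 70,000,000.0 (7% exceeds the fixed cap)
max_art5_fine(1_000_000_000, is_sme=True)  # 35,000,000 (SME: the lower figure)
```

For a large operator the 7% figure dominates as soon as turnover exceeds €500M; below that, the €35M fixed cap is the ceiling.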
Only if it evaluates users' social behaviour over time AND uses that evaluation to treat them adversely in an unrelated context or disproportionately to their actual behaviour. A product recommendation engine that personalises content within the same service context — based on purchase history for purchase recommendations, for example — does not meet the two-limb test. But a platform that builds a 'trustworthiness profile' of users and then uses that profile to restrict access to other services on the platform or third-party services would be more likely classified as social scoring.
Prohibited AI Practices — Article 5
All 8 banned uses explained with the €35M/7% penalty breakdown
EU AI Act Fines and Penalties
Four penalty tiers, SME inverse cap, and how fines are calculated
Biometric AI Systems
Annex III HR-1 high-risk classification for biometric systems
Fundamental Rights Impact Assessment
When FRIAs are required and how to conduct one under Art 27
EU AI Act Overview
Complete structure of the EU AI Act — all four risk categories
High-Risk AI Checklist
Classify your system against all 8 Annex III domains
Regumatrix evaluates your AI system against the Article 5(1)(c) two-limb test and every other EU AI Act provision — returning the exact risk classification and the steps required to comply or redesign.
Get started free