Article 50 sets four specific disclosure requirements for AI systems that interact with people, generate synthetic content, detect emotions, or create deepfakes. These obligations apply regardless of whether the system qualifies as high-risk: even where the full conformity assessment regime does not apply, disclosure to affected persons is still mandatory.
Fine: €15,000,000 or 3% of global annual turnover — whichever is higher
Article 50 violations fall under Article 99(4)(g) — the middle penalty tier, which also covers breaches of high-risk and most other operator obligations. For SMEs and startups, the lower of the two figures applies (Art 99(6)). Enforcement is by national market surveillance authorities. Art 99(4)
Not sure which Art 50 obligations apply to your system? Regumatrix identifies which of the four transparency duties apply, flags any exceptions, and returns your exact obligation set in 30 seconds. Check your system free →
Each obligation has a different scope and a different subject — some apply to providers, some to deployers, and some to both. Read each one carefully: it is common to discover that your system triggers more than one. Art 50
Obligation 1: AI systems that interact with natural persons (Art 50(1))
Who must comply: providers
If your AI system is designed to interact directly with natural persons — chatbots, voice assistants, conversational agents — you must ensure those persons know they are interacting with an AI. The disclosure must be clear and distinguishable, and provided at the latest at the time of the first interaction.
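As an illustration of what "at the latest at the time of the first interaction" can look like in practice, here is a minimal sketch of a chat handler that prepends the disclosure to the first reply of each session. The Session class, the disclosure wording, and the generate_reply stub are hypothetical placeholders, not a prescribed implementation.

```python
# A minimal sketch, assuming a per-user session object; the session
# store and the model call (generate_reply) are hypothetical stand-ins.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Replies are generated automatically."
)

class Session:
    def __init__(self) -> None:
        self.disclosed = False  # has this user seen the notice yet?

def generate_reply(user_message: str) -> str:
    # Stub standing in for the actual model call.
    return f"(model reply to: {user_message})"

def handle_message(session: Session, user_message: str) -> list[str]:
    replies: list[str] = []
    if not session.disclosed:
        # Art 50(1): disclose at the latest at the first interaction,
        # before or alongside the first substantive reply.
        replies.append(AI_DISCLOSURE)
        session.disclosed = True
    replies.append(generate_reply(user_message))
    return replies
```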
When the obligation does not apply:
- Where it is obvious from the context to a reasonably well-informed, observant person that they are interacting with an AI system.
- Where the system is authorised by law to detect, prevent, investigate, or prosecute criminal offences, subject to appropriate safeguards, unless it is available to the public to report a criminal offence.
Obligation 2: Marking synthetic content (Art 50(2))
Who must comply: providers
Providers of AI systems — including GPAI models — that generate synthetic audio, image, video, or text content must ensure outputs are marked in a machine-readable format and detectable as artificially generated or manipulated.
The law requires the technical solution to be effective, interoperable, robust, and reliable, as far as technically feasible. Cost of implementation and the state of the art are taken into account.
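By way of illustration only, the sketch below tags a PNG with a machine-readable "AI-generated" label using Pillow's text chunks. The key names are invented for this example, and plain PNG metadata is easily stripped on re-encode, so this alone would not meet the "robust and reliable" bar; real deployments point towards watermarking or provenance schemes such as C2PA.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(in_path: str, out_path: str) -> None:
    """Embed a machine-readable 'AI-generated' tag in PNG metadata.

    Illustrative only: tEXt chunks survive a file copy but not most
    re-encodes, so this by itself would not satisfy Art 50(2).
    """
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # hypothetical key
    meta.add_text("generator", "example-model-v1")  # hypothetical key
    img.save(out_path, pnginfo=meta)

# Reading the tag back:
#   Image.open(out_path).text  ->  {'ai_generated': 'true', ...}
```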
When the obligation does not apply:
- Where the AI system performs an assistive function for standard editing, or does not substantially alter the input data provided by the deployer or the meaning of that data.
- Where the system is authorised by law to detect, prevent, investigate, or prosecute criminal offences.
Obligation 3: Emotion recognition and biometric categorisation (Art 50(3))
Who must comply: deployers
If you deploy an emotion recognition system or a biometric categorisation system, you must inform the natural persons exposed to it that the system is operating. You must also process any personal data in accordance with GDPR (Regulation 2016/679) and, where applicable, the Law Enforcement Directive (2016/680).
Note that this is a deployer obligation — the provider does not bear it, but the business or organisation actually running the emotion recognition tool does.
When the obligation does not apply:
- Where the system is permitted by law to detect, prevent, or investigate criminal offences, subject to appropriate safeguards and in accordance with Union law.
Obligation 4: Deepfakes and AI-generated text (Art 50(4))
Who must comply: deployers
Deployers of an AI system that generates or manipulates image, audio, or video content constituting a deepfake must disclose that the content has been artificially generated or manipulated.
A separate rule applies to AI-generated text on matters of public interest: deployers who publish such text must disclose that it was artificially generated or manipulated.
Three exceptions:
- Law enforcement: use authorised by law to detect, prevent, investigate, or prosecute criminal offences.
- Artistic works: for evidently satirical, fictional, or otherwise creative content, the duty is reduced to disclosing the existence of generated or manipulated content in a way that does not hamper the display or enjoyment of the work.
- Reviewed text: for AI-generated text published to inform the public on matters of public interest, no disclosure is required where the content has undergone human review or editorial control and a natural or legal person holds editorial responsibility for its publication.
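To make the visible-disclosure duty concrete for image content, here is one possible approach: burning a label into the frame with Pillow. The placement, wording, and styling are this example's choices; Art 50(4) does not prescribe a format.

```python
from PIL import Image, ImageDraw

def add_visible_disclosure(in_path: str, out_path: str,
                           label: str = "AI-generated or manipulated") -> None:
    """Burn a visible disclosure banner into an image frame."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Black banner behind white text in the top-left corner; the
    # position and wording are illustrative, not prescribed.
    left, top, right, bottom = draw.textbbox((12, 12), label)
    draw.rectangle((8, 8, right + 4, bottom + 4), fill="black")
    draw.text((12, 12), label, fill="white")
    img.save(out_path)
```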
All Art 50 disclosures must be provided in a clear and distinguishable manner at the latest at the time of the first interaction or first exposure. Disclosures must also conform to applicable accessibility requirements — you cannot satisfy the obligation with text that is inaccessible to users with disabilities. Art 50(5)
Article 50(6) is explicit: the four transparency obligations do not affect the requirements in Chapter III and are without prejudice to other EU or national transparency laws.
In plain terms: if your AI system is also classified as high-risk under Annex III (for example, a biometric categorisation system used in employment), Art 50 disclosures are in addition to — not instead of — the full Chapter III obligations: risk management, technical documentation, conformity assessment, registration, human oversight, and all other Art 9–15 requirements.
Similarly, Art 50 does not override GDPR. The emotion recognition and biometric categorisation disclosure in Art 50(3) must be accompanied by lawful data processing under GDPR.
Under the current Article 50(7), the Commission was empowered — and implicitly expected — to adopt an implementing act laying down harmonised technical rules for synthetic content watermarking and marking. COM(2025) 836 proposes replacing this with a softer approach.
What this means in practice: if 836 is enacted, the timeline for mandatory harmonised watermarking standards becomes less certain. The Art 50(2) provider obligation to mark synthetic content in a machine-readable format is unchanged — what changes is how technical standards for that marking are developed and enforced.
No changes are proposed under COM(2025) 837 for this topic.
Does Article 50 apply to every AI system?
Article 50 applies to specific categories of AI and specific use cases — not to every AI system. Each of the four obligations has its own scope. Art 50(1) applies to AI systems designed to interact directly with natural persons (chatbots, voice assistants). Art 50(2) applies to AI systems that generate synthetic audio, image, video, or text. Art 50(3) applies to deployers of emotion recognition or biometric categorisation systems. Art 50(4) applies to deployers of deepfake-generating systems and deployers publishing AI-generated text on matters of public interest. A spam filter, a recommendation engine, or an internal data analytics tool does not fall under any of these obligations.
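A rough way to operationalise that scoping logic is to map a handful of yes/no system characteristics to the Art 50 paragraphs they trigger. The profile flags below are this sketch's own invention, and the mapping deliberately ignores the exceptions, which always need case-by-case analysis; it is not legal advice.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    # Hypothetical flags describing the system under assessment.
    interacts_with_persons: bool = False      # chatbot, voice assistant
    generates_synthetic_content: bool = False # audio, image, video, text
    emotion_or_biometric_cat: bool = False    # emotion recognition etc.
    deploys_deepfakes: bool = False
    publishes_public_interest_text: bool = False

def art50_duties(p: SystemProfile) -> list[str]:
    """First-pass mapping of a profile to Art 50 paragraphs (no exceptions)."""
    duties: list[str] = []
    if p.interacts_with_persons:
        duties.append("Art 50(1): disclose AI interaction (provider)")
    if p.generates_synthetic_content:
        duties.append("Art 50(2): machine-readable marking (provider)")
    if p.emotion_or_biometric_cat:
        duties.append("Art 50(3): inform exposed persons (deployer)")
    if p.deploys_deepfakes or p.publishes_public_interest_text:
        duties.append("Art 50(4): disclose generated content (deployer)")
    return duties

# Example: a customer-service chatbot that also generates images
# triggers both 50(1) and 50(2).
print(art50_duties(SystemProfile(interacts_with_persons=True,
                                 generates_synthetic_content=True)))
```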
Do I have to put an 'AI disclosure' banner on my chatbot?
Not necessarily a banner — but you must ensure users know they are interacting with an AI system. The disclosure must be clear and distinguishable, and provided at the latest at the time of the first interaction. How you disclose is not prescribed. A text notice, a visual indicator, or a verbal announcement all work, provided the user is actually informed. The disclosure is not required where this is obvious to a reasonably informed user from the context — for example, a clearly labelled 'AI chatbot' on a product page. The law enforcement exception is narrow: it only applies to systems authorised by law for those specific purposes and not available to the public for reporting crimes.
Who is responsible for marking and disclosure when I use a third-party AI tool?
Art 50(2) is a provider obligation — it requires the provider of the AI system to ensure the outputs are marked in a machine-readable format as artificially generated. If you are using an AI image tool built by someone else, the marking obligation sits primarily with that provider, not you. However, if you are deploying a deepfake — video, audio, or image content that depicts real identifiable people — Art 50(4) applies to you as a deployer and requires you to disclose that the content has been artificially generated or manipulated. The satire/art exception in Art 50(4) is narrow: even in artistic works, you must still disclose the existence of AI-generated content — just in a way that does not hamper the work.
Are there exceptions to the deepfake disclosure obligation?
Yes — three. First, law enforcement: systems authorised by law for criminal investigation and detection are exempt. Second, artistic works: for satirical, fictional, or creative content, the disclosure obligation is reduced — you must still disclose the existence of AI-generated content, but you can do so in a way that does not hamper the display or enjoyment of the work. Third, for AI-generated text published on matters of public interest (journalism, public commentary), the disclosure obligation does not apply if the content has undergone human review or editorial control AND a natural or legal person bears editorial responsibility for publication.
Is Article 50 the 'limited risk' tier of the AI Act?
Article 50 is often called the 'limited risk' transparency tier, but the regulation does not use that label formally. What Art 50 does is require disclosure where there is a risk of users being deceived or not knowing they are interacting with AI or AI-generated content. It does not require risk management systems, technical documentation, conformity assessment, registration in the EU database, or CE marking. Those obligations sit in Chapter III (Arts 9–21) and only apply to high-risk AI systems. Article 50(6) explicitly confirms that Art 50 does not affect Chapter III requirements and is without prejudice to other Union transparency obligations.
Regumatrix checks your AI system against Article 50 and every other relevant article of the EU AI Act. You get your risk tier, the specific obligations that fire for your system — including which of the four Art 50 duties apply and which exceptions might help — plus your fine exposure. Full cited report in around 30 seconds.