Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Limited-risk AI — Article 50 · €15M / 3% · 4 months away

AI Transparency Obligations: What Article 50 Requires

Article 50 sets four specific disclosure requirements for AI systems that interact with people, generate synthetic content, detect emotions, or create deepfakes. These obligations apply to systems that do not qualify as high-risk — meaning the full conformity assessment regime does not apply — but disclosure to affected persons is still mandatory.

Fine: up to €15,000,000 or 3% of global annual turnover — whichever is higher

Article 50 violations fall under Article 99(4) — the tier-2 penalty for high-risk and other obligation breaches. For SMEs and startups, the lower of the two figures applies (Art 99(6)). Enforcement is by national market surveillance authorities. Art 99(4)
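The cap arithmetic above can be sketched in a few lines. The function name and inputs are illustrative shorthand, not terms from the Regulation; it only encodes the "higher of the two, except lower for SMEs" rule from Art 99(4) and 99(6):

```python
def art99_fine_cap(global_turnover_eur: float, is_sme: bool) -> float:
    """Illustrative Art 99(4)/(6) cap: EUR 15M or 3% of worldwide annual
    turnover -- the higher of the two, except that for SMEs and start-ups
    the lower of the two applies (Art 99(6))."""
    fixed_cap = 15_000_000.0
    turnover_cap = 0.03 * global_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)


# Large undertaking, EUR 2bn turnover: 3% = EUR 60M, which exceeds EUR 15M
print(art99_fine_cap(2_000_000_000, is_sme=False))  # 60000000.0
# SME with EUR 10M turnover: 3% = EUR 300k, the lower figure applies
print(art99_fine_cap(10_000_000, is_sme=True))      # 300000.0
```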

Not sure which Art 50 obligations apply to your system? Regumatrix identifies which of the four transparency duties apply, flags any exceptions, and returns your exact obligation set in 30 seconds. Check your system free →

The four transparency obligations

Each obligation has a different scope and a different subject — some apply to providers, some to deployers, and some to both. Read each one carefully: it is common to discover that your system triggers more than one. Art 50

1. Chatbot and AI interaction disclosure — Art 50(1)

Who must comply: providers

If your AI system is designed to interact directly with natural persons — chatbots, voice assistants, conversational agents — you must ensure those persons know they are interacting with an AI. The disclosure must be clear and distinguishable and provided at the latest at the time of first interaction.

When the obligation does not apply:

  • Where it is obvious to a reasonably informed and observant person that they are talking to an AI, given the circumstances and context of use
  • AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences — unless those systems are available to the public for reporting crimes
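One way to satisfy the "at the latest at the time of first interaction" timing is to gate the first reply on a disclosure flag. The class and wording below are hypothetical — Art 50(1) does not prescribe any particular implementation — but the pattern shows disclosure firing exactly once, on the first exchange:

```python
class DisclosingChatbot:
    """Hypothetical wrapper ensuring an Art 50(1)-style notice is shown
    no later than the first interaction (names and wording illustrative)."""

    DISCLOSURE = "You are chatting with an AI assistant, not a human."

    def __init__(self, reply_fn):
        self.reply_fn = reply_fn    # underlying model call
        self.disclosed = False

    def respond(self, user_message: str) -> str:
        reply = self.reply_fn(user_message)
        if not self.disclosed:
            self.disclosed = True   # disclose once, at first interaction
            return f"{self.DISCLOSURE}\n\n{reply}"
        return reply


bot = DisclosingChatbot(lambda msg: f"Echo: {msg}")
first = bot.respond("Hello")    # carries the AI disclosure
second = bot.respond("Thanks")  # disclosure not repeated
```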
2. Machine-readable marking of synthetic content — Art 50(2)

Who must comply: providers

Providers of AI systems — including GPAI models — that generate synthetic audio, image, video, or text content must ensure outputs are marked in a machine-readable format and detectable as artificially generated or manipulated.

The law requires the technical solution to be effective, interoperable, robust, and reliable, as far as technically feasible. Cost of implementation and the state of the art are taken into account.

When the obligation does not apply:

  • AI systems performing an assistive function for standard editing that do not substantially alter the input data or its semantics (e.g. spell-check, colour correction, grammar tools)
  • Systems authorised by law for criminal investigation and detection purposes
3. Emotion recognition and biometric categorisation notice — Art 50(3)

Who must comply: deployers

If you deploy an emotion recognition system or a biometric categorisation system, you must inform the natural persons exposed to it that the system is operating. You must also process any personal data in accordance with GDPR (Regulation 2016/679) and, where applicable, the Law Enforcement Directive (2016/680).

Note that this is a deployer obligation — the provider does not bear it, but the business or organisation actually running the emotion recognition tool does.

When the obligation does not apply:

  • AI systems for biometric categorisation and emotion recognition that are permitted by law to detect, prevent, or investigate criminal offences, subject to safeguards for third-party rights
4. Deepfake disclosure and AI-generated text labelling — Art 50(4)

Who must comply: deployers

Deployers of an AI system that generates or manipulates image, audio, or video content constituting a deepfake must disclose that the content has been artificially generated or manipulated.

A separate rule applies to AI-generated text on matters of public interest: deployers who publish such text must disclose that it was artificially generated or manipulated.

Three exceptions:

  • Law enforcement: use authorised by law for criminal detection, prevention, and investigation
  • Art and satire: where content forms part of an evidently artistic, creative, satirical, or fictional work — but only reduced, not removed: you must still disclose in a way that does not hamper the work
  • Editorial control exception for text: AI-generated text published on public interest matters does not need disclosure if it underwent human review and a natural or legal person bears editorial responsibility for the publication
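The scope rules for the four duties above can be collapsed into a first-pass triage function. All field and duty names below are my own shorthand, and the exceptions discussed in each section are deliberately omitted — this is a sketch of the who-bears-what mapping, not legal advice:

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    # Field names are illustrative, not terms from the Regulation.
    role: str                              # "provider" or "deployer"
    interacts_with_persons: bool = False   # chatbots, voice assistants
    generates_synthetic_content: bool = False
    emotion_or_biometric_cat: bool = False
    deploys_deepfakes: bool = False
    publishes_public_interest_text: bool = False


def art50_duties(s: AISystem) -> list:
    """First-pass mapping of the four Art 50 duties, ignoring exceptions."""
    duties = []
    if s.role == "provider":
        if s.interacts_with_persons:
            duties.append("Art 50(1) interaction disclosure")
        if s.generates_synthetic_content:
            duties.append("Art 50(2) machine-readable marking")
    if s.role == "deployer":
        if s.emotion_or_biometric_cat:
            duties.append("Art 50(3) emotion/biometric notice")
        if s.deploys_deepfakes or s.publishes_public_interest_text:
            duties.append("Art 50(4) deepfake/text disclosure")
    return duties


# A provider whose chatbot also generates images triggers two duties at once
print(art50_duties(AISystem(role="provider",
                            interacts_with_persons=True,
                            generates_synthetic_content=True)))
```

Note how the provider/deployer split falls out naturally: the same underlying system can put Art 50(2) on its provider and Art 50(4) on its deployer.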

Timing and format — Article 50(5)

All Art 50 disclosures must be provided in a clear and distinguishable manner at the latest at the time of the first interaction or first exposure. Disclosures must also conform to applicable accessibility requirements — you cannot satisfy the obligation with text that is inaccessible to users with disabilities. Art 50(5)

What Article 50 does not affect

Article 50(6) is explicit: the four transparency obligations do not affect the requirements in Chapter III and are without prejudice to other EU or national transparency laws.

In plain terms: if your AI system is also classified as high-risk under Annex III (for example, a biometric categorisation system used in employment), Art 50 disclosures are in addition to — not instead of — the full Chapter III obligations: risk management, technical documentation, conformity assessment, registration, human oversight, and all other Art 9–15 requirements.

Similarly, Art 50 does not override GDPR. The emotion recognition and biometric categorisation disclosure in Art 50(3) must be accompanied by lawful data processing under GDPR.

PROPOSAL — not yet enacted law · COM(2025) 836 · Art 1 pt. 15

COM(2025) 836 proposes removing the mandatory watermarking implementing act

Under the current Article 50(7), the Commission was empowered — and implicitly expected — to adopt an implementing act laying down harmonised technical rules for synthetic content watermarking and marking. COM(2025) 836 proposes replacing this with a softer approach:

  • The AI Office encourages and facilitates codes of practice for watermarking and content labelling — the same opt-in model used for GPAI
  • The Commission may assess whether adherence to those codes is adequate — it is no longer required to impose harmonised rules automatically
  • If codes are found inadequate, the Commission may still adopt an implementing act with harmonised rules — but this becomes a fallback, not the default path

What this means in practice: if 836 is enacted, the timeline for mandatory harmonised watermarking standards becomes less certain. The Art 50(2) provider obligation to mark synthetic content in a machine-readable format is unchanged — what changes is how technical standards for that marking are developed and enforced.

No changes are proposed under COM(2025) 837 for this topic.

Five grey areas where providers and deployers frequently get this wrong

  • Assuming the "it's obvious it's AI" exception applies broadly — the test is a reasonably informed and circumspect person, not a tech-savvy one
  • Treating the Art 50(4) satire exception as a full exemption — the exception reduces the obligation, it does not remove it
  • AI writing assistants used by journalists — if the text is published on matters of public interest and no editorial responsibility arrangement is in place, Art 50(4) applies
  • Emotion recognition in HR tools — Art 50(3) deployer disclosure applies even where the system is also subject to high-risk obligations under Annex III
  • Machine-readable marking treated as optional — Art 50(2) requires a technical implementation from the provider, not just a visible label
Check which Art 50 obligations apply to your system →

Frequently asked questions

Does Article 50 apply to ALL AI systems or only to specific ones?

Article 50 applies to specific categories of AI and specific use cases — not to every AI system. Each of the four obligations has its own scope. Art 50(1) applies to AI systems designed to interact directly with natural persons (chatbots, voice assistants). Art 50(2) applies to AI systems that generate synthetic audio, image, video, or text. Art 50(3) applies to deployers of emotion recognition or biometric categorisation systems. Art 50(4) applies to deployers of deepfake-generating systems and deployers publishing AI-generated text on matters of public interest. A spam filter, a recommendation engine, or an internal data analytics tool does not fall under any of these obligations.

Does Article 50(1) require a permanent banner on every chatbot?

Not necessarily a banner — but you must ensure users know they are interacting with an AI system. The disclosure must be clear and distinguishable, and provided at the latest at the time of the first interaction. How you disclose is not prescribed. A text notice, a visual indicator, or a verbal announcement all work, provided the user is actually informed. The disclosure is not required where this is obvious to a reasonably informed user from the context — for example, a clearly labelled 'AI chatbot' on a product page. The law enforcement exception is narrow: it only applies to systems authorised by law for those specific purposes and not available to the public for reporting crimes.

I use AI to generate marketing images. Does Art 50(2) apply to me?

Art 50(2) is a provider obligation — it requires the provider of the AI system to ensure the outputs are marked in a machine-readable format as artificially generated. If you are using an AI image tool built by someone else, the marking obligation sits primarily with that provider, not you. However, if you are deploying a deepfake — video, audio, or image content that depicts real identifiable people — Art 50(4) applies to you as a deployer and requires you to disclose that the content has been artificially generated or manipulated. The satire/art exception in Art 50(4) is narrow: even in artistic works, you must still disclose the existence of AI-generated content — just in a way that does not hamper the work.

Does the deepfake disclosure requirement have exceptions?

Yes — three. First, law enforcement: systems authorised by law for criminal investigation and detection are exempt. Second, artistic works: for satirical, fictional, or creative content, the disclosure obligation is reduced — you must still disclose the existence of AI-generated content, but you can do so in a way that does not hamper the display or enjoyment of the work. Third, for AI-generated text published on matters of public interest (journalism, public commentary), the disclosure obligation does not apply if the content has undergone human review or editorial control AND a natural or legal person bears editorial responsibility for publication.

What does Art 50 NOT cover — what is the 'limited risk' tier?

Article 50 is often called the 'limited risk' transparency tier, but the regulation does not use that label formally. What Art 50 does is require disclosure where there is a risk of users being deceived or not knowing they are interacting with AI or AI-generated content. It does not require risk management systems, technical documentation, conformity assessment, registration in the EU database, or CE marking. Those obligations sit in Chapter III (Arts 9–21) and only apply to high-risk AI systems. Article 50(6) explicitly confirms that Art 50 does not affect Chapter III requirements and is without prejudice to other Union transparency obligations.

Related guides

  • Biometric AI systems
  • Prohibited AI practices (Art 5)
  • Deepfakes & synthetic content
  • AI deployer obligations
  • EU AI Act + GDPR interaction
  • EU AI Act fines & penalties

Which Article 50 obligations apply to your system?

Regumatrix checks your AI system against Article 50 and every other relevant article of the EU AI Act. You get your risk tier, the specific obligations that fire for your system — including which of the four Art 50 duties apply and which exceptions might help — plus your fine exposure. Full cited report in around 30 seconds.

  • ✓ Which of the four Art 50 obligations apply to your system
  • ✓ Exceptions and carve-outs checked for each obligation
  • ✓ High-risk classification check (does Chapter III also apply?)
  • ✓ Fine exposure under Article 99(4)
  • ✓ Full cited report with Article references — no credit card, ~30 seconds
Get your free compliance report →
See prohibited practices