Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Article 50(2) — Provider marking · Article 50(4) — Deployer disclosure · COM(2025) 836 · 2 Feb 2027 transitional

Deepfakes & Synthetic AI Content: What Providers and Deployers Must Do

Article 50 creates two separate obligations for AI-generated and AI-manipulated content. Providers of synthetic content systems must embed machine-readable markers in all outputs — audio, image, video and text. Deployers of deep fake systems must disclose that content is artificially generated. Both obligations apply regardless of whether the underlying system is high-risk or minimal-risk, and regardless of whether it is open-source.

Penalty exposure

  • Violations of Article 50 → Article 99(4): up to €15,000,000 or 3% of total worldwide annual turnover, whichever is higher; for SMEs, the lower of the two thresholds applies
  • Both the provider machine-marking obligation (Art 50(2)) and the deployer deep fake disclosure obligation (Art 50(4)) are distinct — violations of each carry the same penalty ceiling
  • An open-source licence provides no protection — the Art 2(12) open-source exclusion explicitly does not apply where Art 50 is triggered
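The Article 99(4) ceiling logic above can be sketched in a few lines. This is an illustration of the "whichever is higher / lower for SMEs" arithmetic only, not legal advice; the function name and inputs are hypothetical:

```python
def art_99_4_ceiling(turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum fine under Article 99(4): EUR 15,000,000 or 3% of total
    worldwide annual turnover -- the HIGHER of the two, except for SMEs,
    where the LOWER of the two thresholds applies."""
    fixed_cap = 15_000_000
    turnover_cap = 0.03 * turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A large provider with EUR 2bn turnover: 3% = EUR 60M exceeds the fixed cap
print(art_99_4_ceiling(2_000_000_000))     # 60000000.0
# An SME with EUR 10M turnover: lower of EUR 15M and EUR 300k
print(art_99_4_ceiling(10_000_000, True))  # 300000.0
```

Note that for any non-SME with turnover below €500M, the €15M fixed cap is the binding ceiling, since 3% of €500M equals €15M.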

Does Article 50 apply to your AI system?

Regumatrix determines which Article 50 obligations apply to your specific system — chatbot disclosure, emotion recognition, machine marking, or deep fake disclosure — and maps your exact compliance requirements.

Check my AI system — 3 free analyses

The two distinct Article 50 obligations for synthetic content

Article 50 contains multiple distinct transparency obligations. Two of them — paragraphs (2) and (4) — specifically target synthetic and deep fake content. They operate at different stages of the AI supply chain and have different obligation-holders.

Article 50(2) — Provider obligation

Applies to: providers of AI systems (including GPAI) that generate synthetic audio, image, video or text

Obligation: ensure outputs are marked in machine-readable format, detectable as artificially generated or manipulated

Article 50(4) — Deployer obligation

Applies to: deployers of AI systems that generate or manipulate image, audio or video constituting a deep fake

Obligation: disclose to persons exposed that the content is artificially generated or manipulated

Provider machine-marking obligation — Article 50(2)

Any provider of an AI system — including a general-purpose AI model — that generates synthetic audio, image, video or text content must ensure the outputs carry a machine-readable mark indicating they were artificially generated or manipulated.

Technical requirements

Technical solutions must be effective, interoperable, robust, and reliable — insofar as technically feasible — taking into account: (a) specificities and limitations of different types of content; (b) implementation costs; (c) generally acknowledged state of the art. Watermarking, cryptographic provenance metadata, and C2PA-compliant manifest approaches all qualify.
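To make the "machine-readable, detectable" requirement concrete, here is a minimal sketch of a provenance marker that binds a generation claim to a specific output via a content hash. This is illustrative only — a real deployment would use a recognised standard such as C2PA with cryptographic signing, not a bare JSON sidecar — and the generator identifier is invented:

```python
import hashlib
import json

def make_provenance_manifest(content: bytes, generator_id: str) -> str:
    """Minimal machine-readable provenance record for a synthetic output.
    Binds an 'artificially generated' claim to the content's SHA-256 hash."""
    manifest = {
        "claim": "artificially_generated",  # the Art 50(2) assertion
        "generator": generator_id,          # hypothetical system identifier
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)

def is_marked(content: bytes, manifest_json: str) -> bool:
    """Detect the marker: the manifest must carry the claim and still
    match this exact content."""
    m = json.loads(manifest_json)
    return (m.get("claim") == "artificially_generated"
            and m.get("sha256") == hashlib.sha256(content).hexdigest())

output = b"<synthetic image bytes>"
manifest = make_provenance_manifest(output, "example-image-model-v1")
print(is_marked(output, manifest))           # True
print(is_marked(b"edited bytes", manifest))  # False -- marker no longer binds
```

The second check also illustrates the "robust" criterion's difficulty: a plain hash-based sidecar breaks as soon as the content is re-encoded, which is why watermarking and signed C2PA manifests are the approaches cited in practice.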

Exceptions to Art 50(2) — three carve-outs

  • (a) Assistive function for standard editing — AI that assists with routine editing techniques (e.g., grammar checking, background removal) without substantially generating the content does not need to mark outputs
  • (b) No substantial alteration of input data — where the AI does not substantially alter the underlying input data or its semantics (e.g., format conversion, colour adjustment)
  • (c) Authorised law enforcement — systems authorised by law for the detection, prevention, investigation or prosecution of criminal offences

Scope: text included

Unlike the "deep fake" definition in Article 3(60) — which covers only image, audio and video — Article 50(2) expressly includes text. Large language model providers whose systems generate text for publication are within scope, subject to technical feasibility and the three exceptions above.

Deployer deep fake disclosure — Article 50(4)

Article 3(60) — Legal definition of "deep fake"

"AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful."

Note: text is not included in the deep fake definition. Text published to inform the public is addressed by a separate clause in Article 50(4) — see below.

Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake must ensure that persons exposed to that content are informed it is artificially generated or manipulated.

Standard scenario — full disclosure required

Any AI-generated image, audio or video resembling a real person, place or event that would appear authentic must carry a clear disclosure. This applies to deepfake face-swap videos, synthetic voice cloning of real individuals, AI-generated images of real people placed in fictional scenarios, and similar content.

Exception 1 — Law enforcement

Content generated by systems authorised by law for the detection, prevention, investigation or prosecution of criminal offences is exempt from the Art 50(4) disclosure obligation.

Exception 2 — Evidently artistic, satirical or fictional work

Where deep fake content forms part of an evidently artistic, creative, satirical, fictional or analogous work, the deployer's obligation is reduced — not removed. The deployer must disclose the existence of AI-generated or manipulated content in a manner that does not hamper the display or enjoyment of the work. The "evidently" threshold is important: if a reasonable viewer would not immediately recognise the satirical nature, standard full disclosure applies.

AI-generated text for public interest — separate clause in Article 50(4)

Article 50(4) also covers a distinct scenario beyond the deep fake definition: deployers operating AI systems that generate text published with the purpose of informing the public on matters of public interest must disclose the text is AI-generated or manipulated.

Who this targets

News publishers, public information agencies, government communications, and any organisation using AI to produce text presented to the public as factual information on public interest matters — including summaries of legislation, public health guidance, election information, and financial market commentary.

Two exceptions for text/public interest

  • (a) Law enforcement authorised use (same carve-out as deep fake imagery)
  • (b) AI-assisted content that has undergone human review and is published under the editorial responsibility of a natural person or organisation — if the AI-generated or manipulated text has been substantively reviewed and an identified human takes editorial responsibility, the disclosure obligation does not apply

When and how disclosures must be made — Article 50(5)

Article 50(5) sets minimum requirements on timing, form and accessibility of the disclosures required under Article 50.

Timing

Disclosures must be provided at the latest at the time of the first interaction or first exposure to the content — not after, not on a separate page discoverable only with effort.

Form

Disclosures must be clear and distinguishable. They must not be buried in terms and conditions, hidden in small print, or obscured by design. The obligation is to make the disclosure perceptible to the person being exposed to the content.

Accessibility

Disclosures must conform to accessibility requirements applicable to the deployment context — meaning persons with visual, hearing or cognitive impairments must be able to receive the disclosure in a format accessible to them.
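The three Article 50(5) requirements above — timing, form, accessibility — amount to a checklist a deployer can audit against. The sketch below encodes them as a rough pre-review filter; the field names are invented, and real compliance cannot be reduced to booleans:

```python
from dataclasses import dataclass, field

@dataclass
class Disclosure:
    """Hypothetical record of how a deep fake disclosure is presented."""
    shown_at_first_exposure: bool      # timing: at the latest at first exposure
    clear_and_distinguishable: bool    # form: not buried in T&Cs or small print
    accessible_formats: set = field(default_factory=set)  # e.g. {"text", "captions"}

def meets_art_50_5_checklist(d: Disclosure, required_formats: set) -> bool:
    """Rough screen against the Art 50(5) minimum requirements.
    Illustrative only -- a failed check means a likely problem, a passed
    check is not a legal conclusion."""
    return (d.shown_at_first_exposure
            and d.clear_and_distinguishable
            and required_formats <= d.accessible_formats)

ok = Disclosure(True, True, {"text", "audio", "captions"})
buried = Disclosure(False, False, {"text"})
print(meets_art_50_5_checklist(ok, {"text", "captions"}))      # True
print(meets_art_50_5_checklist(buried, {"text", "captions"}))  # False
```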

PROPOSAL — not yet enacted law · COM(2025) 836 — Digital Omnibus

The European Commission's Digital Omnibus proposal (November 2025) includes two amendments affecting deepfake and synthetic content obligations.

Art 1(15) — Article 50(7) replacement: implementing act obligation removed

Current Article 50(7) requires the Commission to adopt implementing acts setting harmonised rules for technical solutions for machine-readable marking. Under 836, this is replaced with a softer mechanism: the AI Office will encourage and facilitate codes of practice. The Commission may assess whether codes are adequate and may adopt implementing acts if it finds them inadequate — but is no longer required to do so. This shifts from a mandatory harmonisation pathway to a codes-first approach with Commission backstop powers.

Art 1(30) — New Article 111(4): transitional for Art 50(2) machine-marking

Deadline: 2 February 2027

Proposed new transitional provision: providers of AI systems — including general-purpose AI systems — that generate synthetic audio, image, video or text content and were placed on the market before 2 August 2026 must take the necessary steps to comply with Article 50(2) by 2 February 2027. This provides a 6-month grace period beyond the general application date (2 August 2026) for watermarking compliance for existing systems.
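The proposed transitional rule reduces to a simple date comparison: systems already on the market before the general application date would get the later deadline. A sketch, assuming the COM(2025) 836 text is enacted as described (it is currently only a proposal, and the function name is invented):

```python
from datetime import date

GENERAL_APPLICATION = date(2026, 8, 2)    # Art 50 applies from this date
PROPOSED_TRANSITIONAL = date(2027, 2, 2)  # draft Art 111(4) grace deadline

def art_50_2_marking_deadline(placed_on_market: date) -> date:
    """Which Art 50(2) machine-marking deadline would apply under the
    (not yet enacted) COM(2025) 836 transitional provision: systems
    placed on the market before 2 Aug 2026 would get until 2 Feb 2027;
    systems placed later must comply from placement."""
    if placed_on_market < GENERAL_APPLICATION:
        return PROPOSED_TRANSITIONAL
    return placed_on_market

print(art_50_2_marking_deadline(date(2025, 6, 1)))   # 2027-02-02
print(art_50_2_marking_deadline(date(2026, 10, 1)))  # 2026-10-01
```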

No changes are proposed under COM(2025) 837 for this topic.

Frequently Asked Questions

What is the difference between the Article 50(2) and Article 50(4) obligations?

Article 50(2) targets providers: any provider of an AI system — including general-purpose AI — that generates synthetic audio, image, video or text must ensure outputs are marked in a machine-readable format, detectable as artificially generated or manipulated. Article 50(4) targets deployers: any deployer operating an AI system that generates or manipulates image, audio or video content constituting a deep fake must disclose that the content is artificially generated or manipulated. The distinction matters: 50(2) is a technical marking obligation on the provider side; 50(4) is a consumer-facing disclosure obligation on the deployer side. Both can apply to the same system if the same entity is both provider and deployer.

What counts as a 'deep fake' under the EU AI Act?

Article 3(60) defines a deep fake as 'AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.' Note that text is not included in the definition of deep fake — it is covered by a separate clause in Article 50(4) addressing AI-generated text published to inform the public on matters of public interest.

Does the satire/art exception remove the obligation to disclose deep fakes?

No. The satire/art exception in Article 50(4) reduces the obligation — it does not eliminate it. Where content forms part of an evidently artistic, creative, satirical, fictional or analogous work, the obligation is limited to disclosing the existence of such AI-generated or manipulated content in a manner that does not hamper the display or enjoyment of the work. This is a lighter disclosure than the standard deep fake disclosure, but disclosure is still required. A clearly satirical AI-generated video of a politician must still indicate it is AI-generated — it just does not need to be as prominent as a standard disclosure.

Does Article 50(2) machine-marking apply to text generated by a large language model?

Yes, but with conditions. Article 50(2) applies to AI systems generating synthetic audio, image, video or text — it expressly includes text. However, the marking requirement is qualified: technical solutions must be effective, interoperable, robust and reliable 'insofar as this is technically feasible.' There are also three exceptions: (a) assistance function for standard editing techniques (minor editing tools); (b) no substantial alteration of input data or its semantics; and (c) authorised by law for detection of criminal offences. Most large-scale LLM providers generating text for publication will be required to ensure their outputs carry machine-readable provenance information.

Does the open-source exception in Article 2(12) apply to deep fake AI systems?

No. Article 2(12) explicitly carves out Article 50 from the open-source exclusion. If you deploy an open-source model that generates deep fake content, Article 50(4) applies to you as a deployer. If you place an open-source model on the market that generates synthetic audio, image, video or text, Article 50(2) applies to you as a provider. The open-source licence does not remove these transparency obligations.

When does the Article 50(2) machine-marking obligation apply to existing AI systems under the COM(2025) 836 proposal?

Under the proposed COM(2025) 836 transitional provision (Article 1(30), amending Article 111), providers of AI systems — including general-purpose AI systems — that generate synthetic audio, image, video or text and were placed on the market before 2 August 2026 would have until 2 February 2027 to comply with Article 50(2). This is currently a legislative proposal and has not yet been enacted into law.

Related Compliance Guides

AI Transparency Obligations — Full Article 50 Guide

All six Article 50 obligations: chatbot disclosure, emotion recognition, biometric categorisation, machine marking, deep fake, and text disclosure.

EU AI Act Scope & Exclusions (Article 2)

Open-source exclusion conditions, R&D carve-outs, and transitional provisions for existing systems.

Minimal-Risk AI — What Still Applies

Article 50 applies to minimal-risk AI too. What obligations remain when your system is not high-risk.

Article 5 Prohibited AI Practices

Some biometric and behavioural AI goes beyond Article 50 — into the absolute prohibitions.

General-Purpose AI Models (Chapter V)

GPAI models generating synthetic content must comply with both Chapter V and Article 50(2).

High-Risk AI Checklist

Classify your AI system — Article 50(2) and 50(4) apply regardless of whether you are also high-risk.

Determine your Article 50 obligations in 30 seconds

Regumatrix maps your AI system against all Article 50 obligations — identifying which of chatbot disclosure, machine-marking, emotion recognition disclosure, and deep fake disclosure apply to your specific use case, and calculates your penalty exposure. No credit card required.

Start free analysis