Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
PROPOSAL — not yet enacted law · Published 19 November 2025

COM(2025) 836 — Digital Omnibus on AI: Every Proposed Change Explained

In November 2025, the European Commission proposed 33 amendments to the EU AI Act under the name "Digital Omnibus on AI." If enacted, the changes delay high-risk compliance deadlines, create a new small mid-cap (SMC) category and extend SME relief to it, streamline conformity assessment, and hand the AI Office exclusive enforcement powers over certain GPAI-based AI systems and very large platforms.

Why this matters for your planning right now

The proposal is in trilogue as of early 2026. Until it is enacted, the current AI Act deadlines stand: the general obligations apply from 2 August 2026 (4 months away). If COM 836 is enacted, the Chapter III high-risk obligations would shift to fallback dates of no later than 2 December 2027 or 2 August 2028, unless the Commission decision mechanism triggers them earlier. Do not treat the proposed delays as permission to pause compliance work.

Not sure which obligations apply to your system under current law?

Regumatrix checks your AI system against every article of the EU AI Act — risk tier, Annex classification, the exact obligations that apply today, and your fine exposure under Article 99. Takes ~30 seconds.

Check your obligations free
PROPOSAL

1. New compliance deadlines

The most commercially significant change: Chapter III high-risk obligations would no longer trigger on a fixed calendar date. They would instead depend on a Commission decision confirming adequate support measures (harmonised standards, common specifications, and guidelines), with hard fallback dates if that decision is not made. The prohibited practices (Art 5), in force since February 2025, and the transparency obligations (Art 50) applying from August 2026 are not affected.

Annex III systems — deadline shifts to 2 December 2027 (fallback)

Chapter III Sections 1, 2, and 3 — every high-risk obligation from risk management through post-market monitoring — would apply 6 months after the Commission adopts a decision confirming adequate harmonised standards and guidelines. If no decision is adopted in time, the hard fallback is 2 December 2027 (20 months away). This covers Annex III systems: AI in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice.

Art 113

836 amendment: Art 1 point 31 — adds Art 113 third paragraph point (d)(i)

Annex I Section A systems — deadline shifts to 2 August 2028 (fallback)

AI systems embedded in products covered by Annex I Section A Union harmonisation legislation — medical devices, machinery, civil aviation, motor vehicles, toys — would follow Chapter III 12 months after the Commission decision. Hard fallback: 2 August 2028 (28 months away). Note: only Section A systems use this route; Section B aviation and automotive systems have a separate pathway under the proposal.

Art 113

836 amendment: Art 1 point 31 — adds Art 113 third paragraph point (d)(ii)
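The proposed Art 113 mechanism in points (d)(i) and (d)(ii) reduces to a small date calculation. The sketch below is a hypothetical illustration, not text from the proposal: the function names are mine, and treating the fallback as a cap via min() is my reading of the "no later than" framing.

```python
from datetime import date
from typing import Optional

# Hard fallback dates proposed by COM(2025) 836 (Art 113, third paragraph, point (d))
FALLBACK = {"annex_iii": date(2027, 12, 2), "annex_i_a": date(2028, 8, 2)}
# Lead time after the Commission decision confirming adequate support measures
LEAD_MONTHS = {"annex_iii": 6, "annex_i_a": 12}

def add_months(d: date, months: int) -> date:
    # Naive month arithmetic; sufficient here because every anchor date has day <= 28
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, d.day)

def chapter_iii_start(category: str, decision: Optional[date]) -> date:
    """Earlier of (decision + lead time) and the hard fallback; fallback if no decision."""
    if decision is None:
        return FALLBACK[category]
    return min(add_months(decision, LEAD_MONTHS[category]), FALLBACK[category])
```

For example, a Commission decision adopted on 1 September 2026 would trigger Annex III obligations on 1 March 2027; with no decision, the 2 December 2027 fallback holds.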

Watermarking grace period — pre-existing systems have until 2 February 2027

Providers of AI systems that generate synthetic audio, image, video, or text content and that were already on the market before 2 August 2026 would have until 2 February 2027 (10 months away) to comply with Article 50(2) marking obligations. This applies to the generation-side obligation — AI systems that output synthetic content must mark it. The general Art 50 application date of August 2026 still applies to new systems placed on the market from that date.

Art 50(2)

836 amendment: Art 1 point 30 — adds Art 111(4) transitional provision
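The grace-period logic above is a single conditional on when the system was placed on the market. A minimal sketch, assuming one placement date per system; the function name is mine, not from the proposal:

```python
from datetime import date

ART50_START = date(2026, 8, 2)     # general Art 50 application date
GRACE_DEADLINE = date(2027, 2, 2)  # proposed transitional deadline, Art 111(4)

def marking_deadline(placed_on_market: date) -> date:
    """When Art 50(2) marking must be in place for a generative system, under COM 836."""
    if placed_on_market < ART50_START:
        return GRACE_DEADLINE      # pre-existing system: grace until 2 Feb 2027
    return placed_on_market        # new system: obligation applies from placement
```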

Public authority deployers — hard deadline 2 August 2030

Providers and deployers of high-risk AI systems intended for use by public authorities — government agencies, public hospitals, public courts, public universities — would have until 2 August 2030 (52 months away) to comply with all Chapter III obligations. This is a hard deadline with no Commission decision mechanism; it is not subject to the earlier fallback dates.

Art 111

836 amendment: Art 1 point 30 — amends Art 111(2)

PROPOSAL

2. New SMC category and SME/SMC relief

COM 836 introduces 'small mid-cap enterprises' (SMCs) as a new protected category in the AI Act and extends every SME privilege to cover SMCs as well. If you are an SME today, your benefits are unchanged. If your company recently grew past the SME threshold, the SMC category may restore equivalent relief.

SMC definition added to Article 3

A new Article 3(14b) would define SMCs by reference to Commission Recommendation (EU) 2025/1099. SMCs sit above SMEs in company size but below large corporations. The SMC definition is inserted alongside a parallel SME definition in Article 3(14a), which references Recommendation 2003/361/EC.

Art 3

836 amendment: Art 1 point 3 — inserts Art 3(14a) and (14b)

Simplified technical documentation form for SME/SMC

SMCs and SMEs (including start-ups) could provide the Annex IV technical documentation in a simplified form. The Commission must create that form. Notified bodies would be required to accept it for conformity assessments — preventing them from demanding full-format documentation from smaller providers.

Art 11

836 amendment: Art 1 point 8 — amends Art 11(1)

Simplified QMS extended from microenterprises to all SMEs

Under current law, only microenterprises qualify for the simplified quality management system option under Article 63. COM 836 extends this to all SMEs, including start-ups. The Commission must develop guidelines specifying which QMS elements may be met in simplified form — without reducing the level of protection required for the AI system itself.

Art 63

836 amendment: Art 1 point 21 — replaces Art 63(1)

SMC fine cap — lower of percentage or fixed amount

Article 99(6) currently applies the lower-of-percentage-or-fixed-amount cap only to SMEs. COM 836 extends this to SMCs. For a violation carrying the €35,000,000 or 7% maximum fine under Article 99(3), an SMC whose 7% figure is lower than €35M pays the lower number. The proportionality instruction in Article 99(1) — requiring Member States to consider SME and SMC economic viability when imposing penalties — is also extended to SMCs.

Art 99(6)

836 amendment: Art 1 point 29 — amends Art 99(1) and (6)

PROPOSAL

3. Regulatory burden reduced

Four mandatory obligations are removed or replaced with softer alternatives — binding rules become guidance, codes of practice, or flexibility for providers to structure their own approach.

AI literacy obligation removed — now government encouragement only

Article 4 currently places a binding obligation on providers and deployers to ensure adequate AI literacy for their staff. COM 836 replaces this entirely: the Commission and Member States shall encourage providers and deployers to take literacy measures, but there is no longer a direct legal requirement on companies. The sanction risk for this specific obligation disappears if enacted.

Art 4

836 amendment: Art 1 point 4 — replaces Art 4

EU database registration removed for Article 6(3) systems

Article 49(2) requires providers who classify their Annex III system as non-high-risk under Article 6(3) to register it in the EU database anyway. COM 836 deletes this paragraph entirely. If your system qualifies for the Article 6(3) non-high-risk derogation — meaning it does not materially affect outcomes for natural persons — you would no longer need to register it in the EU AI Act database.

Art 49(2)

836 amendment: Art 1 point 14 — deletes Art 49(2)

Post-market monitoring mandatory template removed

Article 72(3) currently requires the Commission to adopt an implementing act setting a mandatory harmonised template for post-market monitoring plans. COM 836 removes this: instead, the Commission must publish guidance on post-market monitoring plans. Providers gain flexibility in how they structure that documentation, subject to the guidance.

Art 72

836 amendment: Art 1 point 24 — replaces Art 72(3)

Watermarking implementing act replaced by codes of practice

Article 50(7) currently requires the Commission to adopt implementing acts setting harmonised technical standards for marking and labelling AI-generated content. COM 836 removes this mandatory empowerment. The AI Office would instead encourage voluntary codes of practice at Union level. The Commission may assess adequacy and adopt implementing acts only if codes prove insufficient — but is no longer required to do so.

Art 50(7)

836 amendment: Art 1 point 15 — replaces Art 50(7)

PROPOSAL

4. Conformity assessment streamlined

Notified bodies already designated under Annex I Section A sectoral legislation — medical devices, machinery — would qualify for a single application procedure to gain AI Act designation, rather than starting a parallel process from scratch.

Single application for Annex I Section A notified bodies

A conformity assessment body already designated under medical devices, machinery, or other Annex I Section A Union harmonisation legislation could submit a single application to be designated under the AI Act as well. The notifying authority must provide a single, non-duplicative assessment that builds on the existing designation procedure.

Art 28

836 amendment: Art 1 point 10 — adds Art 28(8)

18-month transition for existing Annex I Section A notified bodies

Notified bodies already operating under Annex I Section A legislation before the AI Act enters application would have 18 months from that date to apply for AI Act designation. This prevents a sudden shortage of authorised conformity assessment bodies on the general application date.

Art 43(3)

836 amendment: Art 1 point 13 — replaces Art 43(3)

New Annex XIV — structured codes for notified body scope

A new Annex XIV defines structured codes across three families: AIP codes (0101–0112) for Annex I Section A product AI systems; AIB codes (0201–0209) for biometric AI systems including remote biometric identification, categorisation by protected characteristics, and emotion recognition; and AIH codes for AI technology types — symbolic and logic-based (AIH 0101–0102), machine learning (AIH 0201–0205), generative and GPAI-based AI (AIH 0301–0302), and agentic AI (AIH 0401). Notified bodies use these codes to specify the scope of their designation.

Annex XIV

836 amendment: Art 1 point 33 — inserts Annex XIV; Art 1 point 12 amends Art 30(2)
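The three code families described above can be laid out as a lookup table. This sketch reproduces only the ranges and labels given in this summary; the proposal's per-code definitions are not filled in, and the dictionary layout is my own:

```python
# Proposed Annex XIV designation-scope codes, as summarised in this guide
ANNEX_XIV = {
    "AIP": {  # Annex I Section A product AI systems
        "scope": "Annex I Section A product AI systems",
        "codes": [f"{n:04d}" for n in range(101, 113)],   # 0101-0112
    },
    "AIB": {  # biometric AI systems
        "scope": "biometric AI systems",
        "codes": [f"{n:04d}" for n in range(201, 210)],   # 0201-0209
    },
    "AIH": {  # horizontal AI technology types
        "scope": "AI technology types",
        "codes": {
            "symbolic/logic-based": ["0101", "0102"],
            "machine learning": [f"{n:04d}" for n in range(201, 206)],  # 0201-0205
            "generative/GPAI-based": ["0301", "0302"],
            "agentic": ["0401"],
        },
    },
}
```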

PROPOSAL

5. AI Office gains exclusive enforcement powers

Article 75, as proposed, creates an exclusive-competence regime for the AI Office over two categories of AI system. For those systems, national market surveillance authorities step aside — the AI Office becomes the sole enforcer and the Commission carries out pre-market conformity assessments.

GPAI-based systems where the model and the AI system share the same provider

If you build both the underlying GPAI model and the AI system that runs on it — for example, a frontier lab that releases a model and also deploys products built on it — the AI Office would be exclusively competent to supervise and enforce AI Act obligations for those downstream products. One key exception: if the system is covered by Annex I Section A product harmonisation legislation (medical devices, machinery, vehicles), national authorities retain jurisdiction.

Art 75(1)

836 amendment: Art 1 point 25(b) — replaces Art 75(1)

AI systems integrated into VLOPs and VLOSEs

AI systems that constitute or are integrated into a Very Large Online Platform (VLOP) or Very Large Online Search Engine (VLOSE) designated under the Digital Services Act would also fall under exclusive AI Office jurisdiction. This aligns oversight of the largest platform AI systems at EU level, regardless of which Member State the platform operates from.

Art 75(1)

836 amendment: Art 1 point 25(b) — replaces Art 75(1)

Commission carries out pre-market conformity for Art 75(1) high-risk systems

For the AI systems covered by the new Article 75(1) scope that are classified high-risk and require third-party conformity assessment under Article 43, the Commission — not a private notified body — would organise and carry out the pre-market conformity assessment. The Commission may delegate performance to a notified body acting on its behalf. Fees are levied on the provider.

Art 75(1c)

836 amendment: Art 1 point 25(c) — inserts Art 75(1c)

PROPOSAL

6. Regulatory sandboxes and real-world testing expanded

An EU-level regulatory sandbox would be added, national sandboxes would gain harmonised governance rules via Commission implementing acts, and real-world testing would extend to Annex I product AI systems.

New EU-level regulatory sandbox via the AI Office

The AI Office would be empowered to establish a Union-level regulatory sandbox for AI systems falling under its exclusive supervision under Article 75(1). It would offer priority access to SMEs, operate in close cooperation with relevant competent authorities, and integrate real-world testing plans. Currently no EU-level sandbox exists — all sandboxes are national.

Art 57(3a)

836 amendment: Art 1 point 17(a) — inserts Art 57(3a)

Harmonised sandbox governance rules via Commission implementing act

The Commission would adopt implementing acts specifying eligibility criteria, application procedures, participation terms, and governance rules for all AI regulatory sandboxes. This replaces fragmented national approaches with a common framework and includes rules on coordination between national and EU-level sandboxes.

Art 58(1)

836 amendment: Art 1 point 18 — replaces Art 58(1)

Real-world testing extended to Annex I Section A and Section B products

Article 60 real-world testing outside sandboxes would extend to high-risk AI systems in Annex I Section A products (medical devices, machinery, motor vehicles). A new Article 60a creates a separate route for Annex I Section B products (civil aviation, certain automotive type-approval systems) via voluntary written agreements between interested Member States and the Commission.

Art 60 / 60a

836 amendments: Art 1 points 19 and 20

PROPOSAL

7. New legal basis to process sensitive data for bias detection

A new Article 4a would create an AI-Act-specific legal basis for any provider or deployer to process special categories of personal data — health data, biometric data, data on racial or ethnic origin — where strictly necessary to detect and correct bias in an AI system.

Article 4a — bias detection data right for all AI systems

Under current law, processing special category data for bias correction is only explicitly permitted for high-risk AI systems under Article 10(5). COM 836 deletes Article 10(5) and replaces it with Article 4a, which applies to providers and deployers of any AI system or GPAI model. Six conditions must all be met simultaneously:

  1. The bias cannot be effectively addressed using synthetic or anonymised data instead
  2. State-of-the-art security and privacy-preserving measures apply, including pseudonymisation
  3. Strict access controls and documentation of who accessed the data and when
  4. No transfer to or access by third parties
  5. Data is deleted once the bias is corrected or the retention period ends, whichever comes first
  6. GDPR records of processing activities must explain why special category data was necessary and why anonymised data was insufficient

Art 4a / 10

836 amendments: Art 1 points 5 and 7 — inserts Art 4a, amends Art 10
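Because the six conditions are cumulative, an internal compliance check is a simple conjunction. A hypothetical checklist sketch; the field names are my own shorthand for the conditions listed above, not terms from the proposal:

```python
from dataclasses import dataclass, fields

@dataclass
class Article4aChecklist:
    no_synthetic_alternative: bool  # 1. bias not addressable with synthetic/anonymised data
    state_of_art_security: bool     # 2. incl. pseudonymisation and privacy-preserving measures
    access_controls_logged: bool    # 3. who accessed the data, and when
    no_third_party_access: bool     # 4. no transfer to or access by third parties
    deletion_on_correction: bool    # 5. deleted once corrected or retention period ends
    ropa_justification: bool        # 6. GDPR records explain why anonymised data was insufficient

    def permitted(self) -> bool:
        """Processing is permitted only if every condition holds simultaneously."""
        return all(getattr(self, f.name) for f in fields(self))
```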

PROPOSAL

8. Agentic AI formally recognised in Annex XIV

The new Annex XIV classification codes include AIH 0401 — agentic AI — as a distinct technology type alongside symbolic AI, machine learning, and generative AI. This is the first time a specific EU AI Act provision names agentic AI directly.

Agentic AI receives designation code AIH 0401

The new Annex XIV organises AI systems into three code families for notified body designation. The AIH (horizontal AI technology) family covers: symbolic and logic-based systems (AIH 0101–0102), machine learning systems (AIH 0201–0205), generative and GPAI-based AI (AIH 0301–0302), and agentic AI (AIH 0401). Notified bodies seeking AI Act designation must specify which codes fall within their scope. The practical implication: if 836 is enacted, providers of agentic AI systems that fall into a high-risk Annex III domain will need to identify a notified body designated with the AIH 0401 scope. There is no equivalent code under current law — agentic AI has no dedicated legal category in the AI Act today.

Annex XIV

836 amendment: Art 1 point 33 — inserts Annex XIV

What to do while the proposal is in trilogue

COM 836 likely affects your planning more than it first appears if any of the following apply:

  • You are building a high-risk Annex III system and had budgeted for full obligations in August 2026 — the delay could free up runway, but only if 836 is enacted before your conformity documentation deadline
  • Your company recently grew past the SME threshold — if the SMC category applies to you, the simplified documentation and penalty cap benefits would be restored
  • You build both a GPAI model and an AI product on top of it — if 836 passes, the AI Office becomes your primary regulator, not your national market surveillance authority
  • You operate a generative AI system already on the market — the watermarking grace period to February 2027 is relevant only if your system was live before 2 August 2026
  • You are a public authority deployer — the August 2030 hard deadline clarifies your planning horizon but does not remove Chapter III obligations; preparation still needs to happen
Check your obligations under current law

No changes are proposed under COM(2025) 837 — Digital Omnibus II for this topic. COM(2025) 837 focuses on GDPR amendments, automated decision-making, cookie consent reform, and data breach notification timelines. It does not further amend the EU AI Act.

Frequently asked questions

Is COM(2025) 836 already law?

No. COM(2025) 836 is a Commission legislative proposal published on 19 November 2025. It enters trilogue — negotiations between the Commission, the European Parliament, and the Council — before it can become law. Until the co-legislators reach agreement and the final text is published in the EU Official Journal, the current AI Act (Regulation (EU) 2024/1689) remains in force unchanged. Plan your compliance against current law; treat the proposed delays as contingency, not certainty.

Does the high-risk deadline delay in COM 836 mean I can stop compliance preparation?

No. The delay is conditional: it depends on the Commission first adopting a decision confirming that adequate harmonised standards, common specifications, and guidelines are available to support compliance. That decision has not been adopted. The fallback dates — 2 December 2027 for Annex III systems and 2 August 2028 for Annex I systems — are the latest possible dates if no decision is reached. Halting preparation now risks being caught unprepared if the Commission decision arrives sooner, triggering obligations earlier than the fallback dates.

Who qualifies as an SMC under COM(2025) 836?

A small mid-cap enterprise (SMC) is defined by reference to Commission Recommendation (EU) 2025/1099, inserted as Article 3(14b) by COM 836. SMCs sit above the SME threshold (SMEs are companies with fewer than 250 employees and either ≤€50M turnover or ≤€43M balance sheet) but below large corporations. If enacted, SMCs gain simplified technical documentation, a simplified QMS, the lower-of-percentage-or-amount fine cap, and priority sandbox access. Check separately whether you already qualify as an SME under Recommendation 2003/361/EC, which gives you these benefits under current law without waiting for 836.
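The SME test summarised above is a two-part check. A sketch using the thresholds stated in this FAQ (the separate SMC thresholds from Recommendation 2025/1099 are not reproduced here, so no is_smc function is attempted):

```python
def is_sme(employees: int, turnover_eur: float, balance_sheet_eur: float) -> bool:
    """SME test per Recommendation 2003/361/EC as summarised in this FAQ:
    fewer than 250 staff AND (turnover <= EUR 50M OR balance sheet <= EUR 43M)."""
    return employees < 250 and (turnover_eur <= 50e6 or balance_sheet_eur <= 43e6)
```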

Does COM(2025) 836 change the Article 5 prohibited AI practices?

No. COM(2025) 836 does not amend Article 5. The eight prohibited AI practices — social scoring, real-time remote biometric identification in publicly accessible spaces, subliminal manipulation below conscious perception, exploitation of vulnerability, biometric categorisation inferring protected characteristics, emotion recognition in workplace or education settings, criminal offence prediction targeting individuals, and facial recognition database scraping — remain fully in force from February 2025. The €35,000,000 or 7% maximum fine under Article 99(3) also remains unchanged.

What is the EU-level AI regulatory sandbox proposed in COM(2025) 836?

Article 57(3a), as proposed by COM 836, empowers the AI Office to establish a regulatory sandbox at Union level for AI systems falling under its exclusive supervision — primarily AI systems built on GPAI models from the same provider, and AI systems in Very Large Online Platforms or Search Engines. This is additional to existing national sandboxes. It would offer priority access to SMEs, operate in close cooperation with relevant competent authorities, and be governed by harmonised rules set in Commission implementing acts proposed under Article 58. No EU-level sandbox currently exists under the AI Act.

Related compliance guides

EU AI Act Timeline 2025–2030

Every enforcement deadline in one place

AI Provider Obligations

What you must do if you build AI systems

AI Deployer Obligations

What you must do if you use AI systems

EU AI Act Fines & Penalties

The four fine tiers and the SME/SMC cap rule

Banned AI Practices (Article 5)

The 8 uses that are completely prohibited

EU AI Act + GDPR Interaction

How both regulations apply to your AI system

Know your obligations under current law

COM(2025) 836 is a proposal. The current AI Act applies today. Regumatrix analyses your AI system and returns:

  • Your risk tier — prohibited, high-risk, limited, or minimal
  • Annex classification and domain number
  • The exact obligations that apply to your system today
  • Fine exposure under Article 99 — specific amounts
  • An 8-section cited compliance report with Article references
  • Which COM 836 and COM 837 proposals affect your situation
  • Takes ~30 seconds — no credit card required
Start free — 3 free analyses
