In November 2025, the European Commission proposed 33 amendments to the EU AI Act under the name “Digital Omnibus on AI.” If enacted, the changes delay high-risk compliance deadlines, create relief for SMEs and a new SMC category, streamline conformity assessment, and hand the AI Office exclusive enforcement powers over GPAI-based AI systems and digital platforms.
Why this matters for your planning right now
The proposal is in trilogue as of early 2026. Until it is enacted, current AI Act deadlines stand. Your general obligations apply from 2 August 2026 (4 months away). If COM 836 is enacted, high-risk Chapter III obligations could shift to a fallback no later than 2 December 2027 or 2 August 2028 — but only if the Commission decision mechanism fails to trigger earlier. Do not treat the proposed delays as permission to pause compliance work.
Not sure which obligations apply to your system under current law?
Regumatrix checks your AI system against every article of the EU AI Act — risk tier, Annex classification, the exact obligations that apply today, and your fine exposure under Article 99. Takes ~30 seconds.
Check your obligations free

The most commercially significant change: Chapter III high-risk obligations would no longer trigger on a fixed calendar date. They would depend on a Commission decision confirming adequate support measures — harmonised standards, common specifications, and guidelines — with hard fallback dates if that decision is not made. The 2 February 2025 application of prohibited practices (Art 5) and the 2 August 2026 application of transparency obligations (Art 50) are not affected.
Chapter III Sections 1, 2, and 3 — every high-risk obligation from risk management through post-market monitoring — would apply 6 months after the Commission adopts a decision confirming adequate harmonised standards and guidelines. If no decision is adopted in time, the hard fallback is 2 December 2027 (20 months away). This covers Annex III systems: AI in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice.
836 amendment: Art 1 point 31 — adds Art 113 third paragraph point (d)(i)
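The trigger mechanism above can be sketched as a small date calculation: obligations apply six months after the Commission decision, capped by the hard fallback. This is an illustration of the rule as described in the proposal, not legal text; the `add_months` helper and the example decision dates are assumptions.

```python
from datetime import date
from typing import Optional

# Hard fallback for Annex III systems under the proposed Art 113(d)(i)
ANNEX_III_FALLBACK = date(2027, 12, 2)

def add_months(d: date, months: int) -> date:
    # Simple month arithmetic; assumes the day of month stays valid
    idx = d.month - 1 + months
    return d.replace(year=d.year + idx // 12, month=idx % 12 + 1)

def annex_iii_application_date(decision: Optional[date]) -> date:
    """Obligations apply 6 months after the Commission decision,
    but never later than the hard fallback date."""
    if decision is None:
        return ANNEX_III_FALLBACK
    return min(add_months(decision, 6), ANNEX_III_FALLBACK)

print(annex_iii_application_date(None))              # 2027-12-02 (no decision: fallback)
print(annex_iii_application_date(date(2026, 9, 2)))  # 2027-03-02 (decision + 6 months)
print(annex_iii_application_date(date(2027, 9, 2)))  # 2027-12-02 (capped at fallback)
```

The same shape applies to the Annex I Section A route below, with a 12-month offset and the 2 August 2028 fallback.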
AI systems embedded in products covered by Annex I Section A Union harmonisation legislation — medical devices, machinery, civil aviation, motor vehicles, toys — would follow Chapter III 12 months after the Commission decision. Hard fallback: 2 August 2028 (28 months away). Note: only Section A systems use this route; Section B aviation and automotive systems have a separate pathway under the proposal.
836 amendment: Art 1 point 31 — adds Art 113 third paragraph point (d)(ii)
Providers of AI systems that generate synthetic audio, image, video, or text content and that were already on the market before 2 August 2026 would have until 2 February 2027 (10 months away) to comply with Article 50(2) marking obligations. This applies to the generation-side obligation — AI systems that output synthetic content must mark it. The general Art 50 application date of August 2026 still applies to new systems placed on the market from that date.
836 amendment: Art 1 point 30 — adds Art 111(4) transitional provision
Providers and deployers of high-risk AI systems intended for use by public authorities — government agencies, public hospitals, public courts, public universities — would have until 2 August 2030 (52 months away) to comply with all Chapter III obligations. This is a hard deadline with no Commission decision mechanism; it is not subject to the earlier fallback dates.
836 amendment: Art 1 point 30 — amends Art 111(2)
COM 836 introduces “small mid-cap enterprises” (SMCs) as a new protected category in the AI Act and extends every SME privilege to cover SMCs as well. If you are an SME today, your benefits are unchanged. If your company recently grew past the SME threshold, the SMC category may restore equivalent relief.
A new Article 3(14b) would define SMCs by reference to Commission Recommendation (EU) 2025/1099. SMCs sit above SMEs in company size but below large corporations. The definition sits alongside the SME definition in Article 3(14a), which references Recommendation 2003/361/EC.
836 amendment: Art 1 point 3 — inserts Art 3(14a) and (14b)
SMCs and SMEs (including start-ups) could provide the Annex IV technical documentation in a simplified form. The Commission must create that form. Notified bodies would be required to accept it for conformity assessments — preventing them from demanding full-format documentation from smaller providers.
836 amendment: Art 1 point 8 — amends Art 11(1)
Under current law, only microenterprises qualify for the simplified quality management system option under Article 63. COM 836 extends this to all SMEs, including start-ups. The Commission must develop guidelines specifying which QMS elements may be met in simplified form — without reducing the level of protection required for the AI system itself.
836 amendment: Art 1 point 21 — replaces Art 63(1)
Article 99(6) currently applies the lower-of-percentage-or-fixed-amount cap only to SMEs. COM 836 extends this to SMCs. For a violation carrying the €35,000,000 or 7% maximum fine under Article 99(3), an SMC whose 7% figure is lower than €35M pays the lower number. The proportionality instruction in Article 99(1) — requiring Member States to consider SME and SMC economic viability when imposing penalties — is also extended to SMCs.
836 amendment: Art 1 point 29 — amends Art 99(1) and (6)
Four mandatory obligations are removed or replaced with softer alternatives — binding rules become guidance, codes of practice, or flexibility for providers to structure their own approach.
Article 4 currently places a binding obligation on providers and deployers to ensure adequate AI literacy for their staff. COM 836 replaces this entirely: the Commission and Member States shall encourage providers and deployers to take literacy measures, but there is no longer a direct legal requirement on companies. The sanction risk for this specific obligation disappears if enacted.
836 amendment: Art 1 point 4 — replaces Art 4
Article 49(2) requires providers who classify their Annex III system as non-high-risk under Article 6(3) to register it in the EU database anyway. COM 836 deletes this paragraph entirely. If your system qualifies for the Article 6(3) non-high-risk derogation — meaning it does not materially affect outcomes for natural persons — you would no longer need to register it in the EU AI Act database.
836 amendment: Art 1 point 14 — deletes Art 49(2)
Article 72(3) currently requires the Commission to adopt an implementing act setting a mandatory harmonised template for post-market monitoring plans. COM 836 removes this: instead, the Commission must publish guidance on post-market monitoring plans. Providers gain flexibility in how they structure that documentation, subject to the guidance.
836 amendment: Art 1 point 24 — replaces Art 72(3)
Article 50(7) currently requires the Commission to adopt implementing acts setting harmonised technical standards for marking and labelling AI-generated content. COM 836 removes this mandatory empowerment. The AI Office would instead encourage voluntary codes of practice at Union level. The Commission may assess adequacy and adopt implementing acts only if codes prove insufficient — but is no longer required to do so.
836 amendment: Art 1 point 15 — replaces Art 50(7)
Notified bodies already designated under Annex I Section A sectoral legislation — medical devices, machinery — would qualify for a single application procedure to gain AI Act designation, rather than starting a parallel process from scratch.
A conformity assessment body already designated under medical devices, machinery, or other Annex I Section A Union harmonisation legislation could submit a single application to be designated under the AI Act as well. The notifying authority must provide a single, non-duplicative assessment that builds on the existing designation procedure.
836 amendment: Art 1 point 10 — adds Art 28(8)
Notified bodies already operating under Annex I Section A legislation before the AI Act enters application would have 18 months from that date to apply for AI Act designation. This prevents a sudden shortage of authorised conformity assessment bodies on the general application date.
836 amendment: Art 1 point 13 — replaces Art 43(3)
A new Annex XIV defines structured codes across three families: AIP codes (0101–0112) for Annex I Section A product AI systems; AIB codes (0201–0209) for biometric AI systems including remote biometric identification, categorisation by protected characteristics, and emotion recognition; and AIH codes for AI technology types — symbolic and logic-based (AIH 0101–0102), machine learning (AIH 0201–0205), generative and GPAI-based AI (AIH 0301–0302), and agentic AI (AIH 0401). Notified bodies use these codes to specify the scope of their designation.
836 amendment: Art 1 point 33 — inserts Annex XIV; Art 1 point 12 amends Art 30(2)
Article 75, as proposed, creates an exclusive-competence regime for the AI Office over two categories of AI system. For those systems, national market surveillance authorities step aside — the AI Office becomes the sole enforcer and the Commission carries out pre-market conformity assessments.
If you build both the underlying GPAI model and the AI system that runs on it — for example, a frontier lab that releases a model and also deploys products built on it — the AI Office would be exclusively competent to supervise and enforce AI Act obligations for those downstream products. One key exception: if the system is covered by Annex I Section A product harmonisation legislation (medical devices, machinery, vehicles), national authorities retain jurisdiction.
836 amendment: Art 1 point 25(b) — replaces Art 75(1)
AI systems that constitute or are integrated into a Very Large Online Platform (VLOP) or Very Large Online Search Engine (VLOSE) designated under the Digital Services Act would also fall under exclusive AI Office jurisdiction. This aligns oversight of the largest platform AI systems at EU level, regardless of which Member State the platform operates from.
836 amendment: Art 1 point 25(b) — replaces Art 75(1)
For the AI systems covered by the new Article 75(1) scope that are classified high-risk and require third-party conformity assessment under Article 43, the Commission — not a private notified body — would organise and carry out the pre-market conformity assessment. The Commission may delegate performance to a notified body acting on its behalf. Fees are levied on the provider.
836 amendment: Art 1 point 25(c) — inserts Art 75(1c)
An EU-level regulatory sandbox would be added, national sandboxes would gain harmonised governance rules via Commission implementing acts, and real-world testing would extend to Annex I product AI systems.
The AI Office would be empowered to establish a Union-level regulatory sandbox for AI systems falling under its exclusive supervision under Article 75(1). It would offer priority access to SMEs, operate in close cooperation with relevant competent authorities, and integrate real-world testing plans. Currently no EU-level sandbox exists — all sandboxes are national.
836 amendment: Art 1 point 17(a) — inserts Art 57(3a)
The Commission would adopt implementing acts specifying eligibility criteria, application procedures, participation terms, and governance rules for all AI regulatory sandboxes. This replaces fragmented national approaches with a common framework and includes rules on coordination between national and EU-level sandboxes.
836 amendment: Art 1 point 18 — replaces Art 58(1)
Article 60 real-world testing outside sandboxes would extend to high-risk AI systems in Annex I Section A products (medical devices, machinery, motor vehicles). A new Article 60a creates a separate route for Annex I Section B products (civil aviation, certain automotive type-approval systems) via voluntary written agreements between interested Member States and the Commission.
836 amendments: Art 1 points 19 and 20
A new Article 4a would create an AI-Act-specific legal basis for any provider or deployer to process special categories of personal data — health data, biometric data, data on racial or ethnic origin — where strictly necessary to detect and correct bias in an AI system.
Under current law, processing special category data for bias correction is only explicitly permitted for high-risk AI systems under Article 10(5). COM 836 deletes Article 10(5) and replaces it with Article 4a, which applies to providers and deployers of any AI system or GPAI model, subject to six cumulative conditions that must all be met.
836 amendments: Art 1 points 5 and 7 — inserts Art 4a, amends Art 10
The new Annex XIV classification codes include AIH 0401 — agentic AI — as a distinct technology type alongside symbolic AI, machine learning, and generative AI. This is the first time a specific EU AI Act provision names agentic AI directly.
The new Annex XIV organises AI systems into three code families for notified body designation. The AIH (horizontal AI technology) family covers: symbolic and logic-based systems (AIH 0101–0102), machine learning systems (AIH 0201–0205), generative and GPAI-based AI (AIH 0301–0302), and agentic AI (AIH 0401). Notified bodies seeking AI Act designation must specify which codes fall within their scope. The practical implication: once 836 is enacted, providers of agentic AI systems that fall into a high-risk Annex III domain will need to identify a notified body designated with the AIH 0401 scope. There is no equivalent code under current law — agentic AI has no dedicated legal category in the AI Act today.
836 amendment: Art 1 point 33 — inserts Annex XIV
Taken together, these changes suggest COM 836 affects your planning more than it might first appear.
Does COM(2025) 837 (Digital Omnibus II) also amend the AI Act?
No. COM(2025) 837 focuses on GDPR amendments, automated decision-making, cookie consent reform, and data breach notification timelines. It does not further amend the EU AI Act.
Is COM(2025) 836 already law?
No. COM(2025) 836 is a Commission legislative proposal published on 19 November 2025. It enters trilogue — negotiations between the Commission, the European Parliament, and the Council — before it can become law. Until the co-legislators reach agreement and the final text is published in the EU Official Journal, the current AI Act (Regulation (EU) 2024/1689) remains in force unchanged. Plan your compliance against current law; treat the proposed delays as contingency, not certainty.
Can I pause high-risk compliance work until the fallback dates?
No. The delay is conditional: it depends on the Commission first adopting a decision confirming that adequate harmonised standards, common specifications, and guidelines are available to support compliance. That decision has not been adopted. The fallback dates — 2 December 2027 for Annex III systems and 2 August 2028 for Annex I systems — are the latest possible dates if no decision is reached. Halting preparation now risks being caught unprepared if the Commission decision arrives sooner, triggering obligations earlier than the fallback dates.
What is a small mid-cap enterprise (SMC)?
A small mid-cap enterprise (SMC) is defined by reference to Commission Recommendation (EU) 2025/1099, inserted as Article 3(14b) by COM 836. SMCs sit above the SME threshold — companies below 250 employees and either ≤€50M turnover or ≤€43M balance sheet — but below large corporations. If enacted, SMCs gain simplified technical documentation, simplified QMS, the lower-of-percentage-or-amount fine cap, and priority sandbox access. Check separately whether you already qualify as an SME under Recommendation 2003/361/EC, which gives you these benefits under current law without waiting for 836.
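As a rough sketch, the SME thresholds quoted above can be checked like this. The SMC upper bounds come from Recommendation (EU) 2025/1099 and are not restated in the text, so only the SME side is encoded; the example companies are hypothetical.

```python
def is_sme(headcount: int, turnover_eur: float, balance_sheet_eur: float) -> bool:
    # Recommendation 2003/361/EC thresholds as quoted above:
    # fewer than 250 staff AND (turnover <= EUR 50M OR balance sheet <= EUR 43M)
    return headcount < 250 and (
        turnover_eur <= 50_000_000 or balance_sheet_eur <= 43_000_000
    )

print(is_sme(180, 40_000_000, 60_000_000))  # True: within staff and turnover limits
print(is_sme(400, 40_000_000, 30_000_000))  # False: 250 staff or more
```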
Does COM 836 relax the prohibited practices or the maximum fines?
No. COM(2025) 836 does not amend Article 5. The eight prohibited AI practices — social scoring, real-time remote biometric identification in publicly accessible spaces, subliminal manipulation below conscious perception, exploitation of vulnerability, biometric categorisation inferring protected characteristics, emotion recognition in workplace or education settings, criminal offence prediction targeting individuals, and facial recognition database scraping — remain fully in force from February 2025. The €35,000,000 or 7% maximum fine under Article 99(3) also remains unchanged.
What is the proposed Union-level regulatory sandbox?
Article 57(3a), as proposed by COM 836, empowers the AI Office to establish a regulatory sandbox at Union level for AI systems falling under its exclusive supervision — primarily AI systems built on GPAI models from the same provider, and AI systems in Very Large Online Platforms or Search Engines. This is additional to existing national sandboxes. It would offer priority access to SMEs, operate in close cooperation with relevant competent authorities, and be governed by harmonised rules set in Commission implementing acts proposed under Article 58. No EU-level sandbox currently exists under the AI Act.
Related guides:
EU AI Act Timeline 2025–2030: every enforcement deadline in one place
AI Provider Obligations: what you must do if you build AI systems
AI Deployer Obligations: what you must do if you use AI systems
EU AI Act Fines & Penalties: the four fine tiers and the SME/SMC cap rule
Banned AI Practices (Article 5): the 8 uses that are completely prohibited
EU AI Act + GDPR Interaction: how both regulations apply to your AI system
COM(2025) 836 is a proposal. The current AI Act applies today. Regumatrix analyses your AI system and returns your risk tier, Annex classification, the exact obligations that apply now, and your fine exposure under Article 99.
No credit card required