The DSA governs online platforms — how they moderate content, manage algorithms, and protect users at scale. The EU AI Act governs AI systems themselves — their risk classification, conformity, and oversight. On a Very Large Online Platform (VLOP), both apply to the same algorithm. COM(2025) 836 makes this intersection even tighter: Article 75 is amended to give the EU AI Office exclusive competence over AI systems embedded in designated VLOPs and Very Large Online Search Engines (VLOSEs) — bypassing national market surveillance authorities entirely.
Digital Services Act (DSA)
Regulation (EU) 2022/2065 — fully applicable since February 2024; enhanced VLOP/VLOSE obligations since late August 2023
EU AI Act
Regulation (EU) 2024/1689 — most high-risk obligations from August 2026
| Aspect | DSA | EU AI Act |
|---|---|---|
| What it regulates | Online intermediary services — platforms, search engines, hosting services, access providers — based on service type and scale | AI systems placed on or used in the EU market — based on risk tier (prohibited, high-risk, limited-risk, minimal-risk) |
| Who it applies to | Online platforms by size: all intermediaries at some level, with enhanced obligations for VLOPs (≥45M EU users) and VLOSEs | Providers (develop/place on market), deployers (use in professional context), importers, distributors — regardless of platform size |
| Core AI-related obligation | Recommender system transparency (Art 27); systemic risk assessment for algorithmic harm (Arts 34–35); DSA audits (Art 37) | Risk management, technical documentation, conformity assessment, human oversight, Art 50 transparency disclosure, serious incident reporting |
| Recommender systems | VLOPs must offer users at least one recommender option not based on profiling (Art 38); all platforms must explain the main parameters used and any options to modify them (Art 27) | Recommender systems are not automatically high-risk. They fall under Art 50(1) disclosure only if they interact directly with users; the provider must design the system so users know they are interacting with AI, unless this is obvious |
| Systemic/algorithmic risk | Mandatory systemic risk assessment (Art 34): VLOPs must assess risks from algorithmic amplification to public health, civic discourse, electoral integrity, gender-based violence, and information manipulation | No equivalent systemic risk obligation for platforms. AI Act risk management (Art 9) is system-specific, not platform-wide |
| Enforcement authority | European Commission (for VLOPs/VLOSEs directly) + Digital Services Coordinators (national, for smaller platforms) | National market surveillance authorities; EU AI Office for GPAI models. Under 836: AI Office exclusively for VLOP-integrated AI and GPAI-same-provider systems |
| Penalty maximum | 6% of annual worldwide turnover (VLOPs), or 1% for supplying incorrect information; temporary access restriction for repeated breaches | €35M/7% for prohibited AI practices; €15M/3% for most other obligation violations; €7.5M/1% for supplying incorrect information (whichever is higher in each case) |
| When it applies | Phased roll-out from late August 2023 for designated VLOPs/VLOSEs; fully applicable to all intermediaries since 17 February 2024 | 2 February 2025 (prohibitions); 2 August 2026 (most high-risk obligations including conformity assessment) |
AI content disclosure — Art 50(1) AI Act ↔ DSA Art 27
AI Act Art 50(1) requires providers to design AI systems that interact directly with natural persons so that those users are informed they are interacting with an AI system. DSA Art 27 requires online platforms that use recommender systems to set out the main parameters used for content recommendations and, where available, any options for users to modify them. Both apply to chatbots, AI-driven feeds, and interactive AI features on VLOPs. AI Act Art 50 is a provider obligation; DSA Art 27 is a platform operator obligation. The same AI feature can trigger both.
Risk assessment — AI Act Art 9 ↔ DSA Art 34
AI Act Art 9 requires a risk management system for every high-risk AI system — identifying hazards, estimating risks, evaluating risks across the lifecycle. DSA Art 34 requires VLOPs and VLOSEs to identify systemic risks arising from their services at platform level — including risks from algorithmic systems, advertising, and content amplification. These are parallel but independent obligations: Art 9 is system-level and about harm to individual users; DSA Art 34 is platform-level and about harm to society. Conducting one does not satisfy the other.
Human oversight — AI Act Art 14 ↔ DSA Art 35
AI Act Art 14 requires high-risk AI systems to allow effective human oversight, including the ability to override or stop the system. DSA Art 35 requires VLOPs to put in place the risk mitigation measures identified in their Art 34 assessment, which may include human reviewers, oversight mechanisms, or algorithmic guardrails. For content moderation AI on VLOPs the two overlap in practice: the same team may provide both the Art 14 override capability and the Art 35 human review function.
GPAI models powering VLOP features — GPAI Chapter ↔ DSA Art 34
A VLOP that uses a GPAI model to power its search, feed, or recommendation features faces intersecting obligations: the GPAI Chapter of the AI Act (Arts 51–56) governs the model provider's systemic risk assessment (for frontier models), model cards, and transparency; the DSA Art 34 systemic risk assessment governs the VLOP's platform-level risks from deploying that model's outputs at scale. For platforms that develop and deploy their own GPAI models (Google, Meta), the Art 75 amendment under 836 makes the AI Office the single enforcement authority for both.
Is your platform subject to both the AI Act and DSA?
Regumatrix maps your AI systems against both frameworks, identifies which DSA systemic risk assessments must cover AI-specific risks, and flags whether your system falls under AI Office jurisdiction under the 836 Art 75 amendment.
Check my platform obligations — 3 free analyses

Under current law, the AI Act is enforced by national market surveillance authorities: a VLOP operating across 20 Member States could face 20 separate national investigations for the same AI system. 836 resolves this fragmentation by centralising enforcement.
AI Office exclusive competence — amended Art 75(1)
The AI Office shall be exclusively competent for the supervision and enforcement of the AI Act in relation to:

- AI systems built on general-purpose AI (GPAI) models where the GPAI provider and the AI system provider are the same entity (excluding Annex I product safety systems); and
- AI systems constituting, or integrated into, a platform designated as a VLOP or VLOSE under the DSA.
AI Office powers under the new rule
When exercising this exclusive competence, the AI Office has all the powers of a market surveillance authority — including the power to require documentation, conduct investigations, order corrective actions, and impose fines under AI Act Art 99. A separate implementing act will define the detailed enforcement procedures and penalty conditions. National MSAs lose jurisdiction over VLOP-integrated AI for the purposes listed above.
Interaction with DSA enforcement
The European Commission enforces the DSA for VLOPs directly. Under 836, the AI Office (a Commission body) also enforces the AI Act for VLOP AI systems. This creates a single institutional locus, the Commission, responsible both for DSA platform governance and for AI Act compliance of AI systems at VLOPs. In practice, this should facilitate joint investigations and avoid national MSAs and the Commission running parallel probes into the same AI feature on the same platform.
Confirm DSA applicability
Check whether your platform is designated as a VLOP or VLOSE. DSA designation by the European Commission triggers both the enhanced DSA obligations and (under 836) the AI Act's AI Office enforcement track.
Map AI systems on the platform
Catalogue all AI systems embedded in the platform — recommendation engines, content moderation AI, AI search ranking, chatbots, advertising auction AI. Each is a separate AI Act subject and may also be in scope of DSA Art 34 systemic risk assessment.
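The catalogue above can be kept as structured records, one per AI system, flagging which framework provisions each system touches. A minimal sketch; the class and field names are this sketch's own simplification, not terms from either regulation or from Regumatrix:

```python
from dataclasses import dataclass

@dataclass
class PlatformAISystem:
    """Inventory record for one AI system embedded in a platform."""
    name: str                   # e.g. "feed ranking", "ad auction AI"
    interacts_with_users: bool  # may trigger AI Act Art 50(1) disclosure
    annex_iii_high_risk: bool   # e.g. an embedded credit-scoring feature
    gpai_based: bool            # underlying model in scope of Arts 51-56
    in_dsa_art34_scope: bool    # contributes to platform systemic risk

    def obligations(self) -> list[str]:
        obs = []
        if self.annex_iii_high_risk:
            obs.append("AI Act high-risk chapter (Arts 9, 11, 14, 43)")
        if self.interacts_with_users:
            obs.append("AI Act Art 50(1) disclosure")
        if self.gpai_based:
            obs.append("GPAI Chapter (Arts 51-56, model provider)")
        if self.in_dsa_art34_scope:
            obs.append("DSA Art 34 systemic risk assessment")
        return obs

recommender = PlatformAISystem(
    "feed ranking", interacts_with_users=False,
    annex_iii_high_risk=False, gpai_based=True, in_dsa_art34_scope=True)
print(recommender.obligations())
```

A record like this makes the "separate AI Act subject" point concrete: a single platform typically holds several such entries, each with its own obligation set.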
Classify under AI Act risk tiers
Most recommendation and content moderation AI systems are not in Annex III (high-risk). They are limited-risk (Art 50 disclosure) or minimal-risk. A credit scoring or HR AI feature embedded in a VLOP, by contrast, would be Annex III high-risk. Also check whether any underlying models trigger GPAI Chapter obligations (Arts 51–56).
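The triage logic in this step can be sketched as a simple decision ladder. This is a deliberate simplification of this sketch's own making; real classification requires checking the Art 5 prohibitions and the Annex III use cases in full:

```python
# Rough AI Act risk-tier triage for a platform AI feature, checking
# the tiers in order of severity: prohibited, high-risk, limited, minimal.

def ai_act_tier(prohibited: bool, annex_iii: bool, user_facing: bool) -> str:
    if prohibited:
        return "prohibited (Art 5)"
    if annex_iii:
        return "high-risk (Annex III)"
    if user_facing:
        return "limited-risk (Art 50 transparency)"
    return "minimal-risk"

# A typical chatbot or AI-driven feed on a VLOP:
print(ai_act_tier(prohibited=False, annex_iii=False, user_facing=True))
# An embedded credit-scoring feature:
print(ai_act_tier(prohibited=False, annex_iii=True, user_facing=True))
```

The ordering matters: a user-facing system that is also in Annex III is high-risk first, with Art 50 disclosure applying on top.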
Run parallel assessments
DSA Art 34 systemic risk assessment and AI Act Art 9 risk management must both be run — independently. They do not satisfy each other, but outputs can inform each other. Document separately to demonstrate compliance with each authority.
Prepare for AI Office jurisdiction (836)
If 836 is adopted, AI Act enforcement for your VLOP AI systems shifts to the AI Office. Engage with the AI Office as the primary contact for AI Act compliance matters. AI Act documentation requirements remain the same — only the enforcing authority changes.
Yes. The Digital Services Act (DSA) applies to all online intermediary services, including Very Large Online Platforms (VLOPs) with 45 million or more average monthly active users in the EU. The EU AI Act applies to AI systems placed on or used in the EU market regardless of the platform operator's size. A VLOP that deploys recommendation systems, content moderation AI, or AI-powered search features must comply with both: the DSA for platform governance and algorithmic transparency, and the EU AI Act for AI system-specific obligations (risk management, human oversight, conformity assessment, transparency disclosures).
The DSA focuses on platform-level governance obligations that the AI Act does not address. Key DSA-specific requirements include: mandatory systemic risk assessments (Art 34) and risk mitigation measures (Art 35) for VLOPs and VLOSEs covering algorithmic amplification, manipulation, and fundamental rights risks; recommender system transparency and user choice (Art 27) — VLOPs must offer at least one option not based on profiling; annual independent audits (Art 37); access to data for vetted researchers; and enhanced advertising transparency. The AI Act does not impose these platform-governance obligations.
The EU AI Act requires AI system-specific obligations that go beyond DSA requirements: risk management systems (Art 9) for high-risk AI; conformity assessments (Art 43); notified body involvement for certain systems; CE marking; technical documentation (Art 11); post-market monitoring (Art 72); serious incident reporting (Art 73); registration in the EU database for high-risk systems (Art 49); and detailed data governance requirements (Art 10). The DSA has no equivalent to these product-compliance obligations. A VLOP must maintain DSA transparency reports and systemic risk assessments, but the AI Act's conformity machinery is completely separate.
Under current law, the EU AI Act is enforced by national market surveillance authorities (MSAs) in each Member State where the VLOP operates. COM(2025) 836 proposes to amend Article 75 of the AI Act to give the AI Office exclusive competence for two categories: (1) AI systems built on general-purpose AI (GPAI) models where the GPAI provider and the system provider are the same entity (excluding Annex I product safety systems); and (2) AI systems constituting or integrated into a designated VLOP or VLOSE under the DSA. This means AI systems embedded in platforms like Google Search, Meta Instagram, or TikTok would be supervised and enforced by the EU AI Office — not 27 separate national MSAs. The AI Office acquires all market surveillance authority powers, including the ability to impose penalties.
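The two exclusive-competence categories described above can be expressed as a short predicate. A sketch only: the booleans compress legal tests (DSA designation, provider identity, Annex I scope) that in practice each need their own analysis:

```python
# Sketch of the AI Office exclusive-competence test that COM(2025) 836
# would add to AI Act Art 75, per the two categories described above.

def ai_office_exclusive(
    integrated_in_designated_vlop_or_vlose: bool,
    built_on_own_gpai_model: bool,       # GPAI provider == system provider
    annex_i_product_safety: bool = False,
) -> bool:
    """True if, under the 836 proposal, the EU AI Office rather than
    national MSAs would supervise and enforce the AI Act for this system."""
    if integrated_in_designated_vlop_or_vlose:
        return True   # category (2): VLOP/VLOSE-integrated AI
    if built_on_own_gpai_model and not annex_i_product_safety:
        return True   # category (1): same-provider GPAI system
    return False      # default: national MSAs remain competent

# A recommender embedded in a designated VLOP:
print(ai_office_exclusive(True, False))   # AI Office jurisdiction
```

Note the Annex I carve-out only matters for category (1); an Annex I system integrated into a designated VLOP would still fall under category (2) on this reading.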
No. DSA Art 34 systemic risk assessment and AI Act Art 9 risk management system are separate obligations measuring different things. The DSA systemic risk assessment examines platform-level risks from algorithmic amplification — societal harms, information manipulation, gender-based violence, electoral manipulation. The AI Act risk management system examines AI system-specific risks to the health, safety, and fundamental rights of the individuals it affects. A recommendation algorithm risk assessment under DSA Art 34 does not satisfy AI Act Art 9. Both must be conducted independently.
GPAI Systemic Risk — AI Act Arts 51–56
Frontier GPAI model obligations including systemic risk assessment — feeds into DSA Art 34 for VLOP deployments.
EU AI Act vs GDPR — Key Differences
At VLOPs, GDPR also applies to personal data processing by recommendation and targeting AI.
AI Transparency Obligations (Art 50)
Art 50 disclosures apply to AI-user interactions on VLOPs, parallel to DSA Art 27 recommender transparency.
EU AI Act Market Surveillance & Enforcement
How the AI Office and national MSAs divide enforcement — and how 836 shifts VLOP jurisdiction to the AI Office.
Digital Omnibus 836 — Full Overview
All AI Act changes in COM(2025) 836 including the Art 75 enforcement overhaul.
General-Purpose AI Models Compliance Guide
GPAI model obligations under Arts 51–56 — directly relevant when VLOP AI is GPAI-based.
Regumatrix identifies every AI system on your platform, maps them against AI Act risk tiers and DSA systemic risk categories, and generates a combined compliance checklist for both regulators.
Start free analysis