Enforcement under the EU AI Act is divided between national market surveillance authorities and the AI Office. Who investigates your system depends on the type of AI, how it is used, and which sector it operates in.
The EU AI Act does not create a wholly new enforcement infrastructure. Art 74(1) applies Regulation (EU) 2019/1020 (the Union Market Surveillance Regulation) to AI systems, treating them as “products” and AI operators as “economic operators” for enforcement purposes. This means the market surveillance powers, mutual assistance procedures, and safeguard mechanisms that apply to physical products also apply to AI systems — adapted for the specific characteristics of AI.
Which authority has jurisdiction over your AI system? Regumatrix identifies your applicable MSA and enforcement path automatically →
Article 74 designates different market surveillance authorities depending on the AI system's sector and use. Identify all categories that apply to your system.
Authority: Sectoral market surveillance authority
For AI systems embedded in products covered by Union harmonisation legislation listed in Annex I Section A (e.g., medical devices, machinery, toys), the MSA is whichever authority is responsible for market surveillance under that product regulation. For AI systems used by financial institutions, the national financial supervisory authority acts as MSA.
Authority: Data protection supervisory authority (or equivalent)
For high-risk AI systems used for law enforcement, border management and migration, or the administration of justice, Member States must designate as MSA either the competent data protection supervisory authority under the GDPR or the LED (Directive (EU) 2016/680), or any other authority designated under the same conditions laid down in Articles 41 to 44 of the LED. Market surveillance activities must not interfere with judicial authorities acting in their judicial capacity.
Authority: European Data Protection Supervisor (EDPS)
The EDPS acts as market surveillance authority for Union institutions, bodies, offices, and agencies that fall within the regulation's scope — except for the Court of Justice of the European Union when acting in its judicial capacity.
Authority: AI Office (via European Commission)
The Commission has exclusive powers to supervise and enforce Chapter V (Arts 53–55, Art 88). Where the GPAI model and the AI system using it are developed by the same provider, the AI Office has full market surveillance authority powers over that high-risk AI system too. National MSAs may request the Commission's assistance; the AI Office must supply relevant information within 30 days.
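The designation rules above can be sketched as a first-pass lookup. Everything in the sketch below — the `Authority` enum, the `competent_authority` function, and the priority order of the checks — is an illustrative assumption, not anything defined by the Act; in practice the designation turns on Member State implementing law.

```python
from enum import Enum, auto

class Authority(Enum):
    """Possible market surveillance authorities under Articles 74 and 88."""
    SECTORAL_MSA = auto()          # Annex I Section A product authority
    FINANCIAL_SUPERVISOR = auto()  # national financial supervisory authority
    DPA_OR_EQUIVALENT = auto()     # designation under LED conditions
    EDPS = auto()                  # Union institutions, bodies, agencies
    AI_OFFICE = auto()             # Commission, exclusive for Chapter V
    NATIONAL_MSA = auto()          # default national designation

def competent_authority(*,
                        union_institution: bool = False,
                        gpai_same_provider: bool = False,
                        annex_i_product: bool = False,
                        financial_institution: bool = False,
                        law_enforcement_or_justice: bool = False) -> Authority:
    """First-pass mapping of the designation rules; the check order
    is a simplifying assumption, not an ordering fixed by the Act."""
    if union_institution:
        return Authority.EDPS
    if gpai_same_provider:
        return Authority.AI_OFFICE
    if annex_i_product:
        return Authority.SECTORAL_MSA
    if financial_institution:
        return Authority.FINANCIAL_SUPERVISOR
    if law_enforcement_or_justice:
        return Authority.DPA_OR_EQUIVALENT
    return Authority.NATIONAL_MSA
```

Real systems can fall into several categories at once (a GPAI-based medical device used by a bank, say), which is exactly why the Act layers sectoral, financial, and Commission-level competences rather than naming a single authority.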
Market surveillance authorities must be granted full access to documentation and to training, validation, and testing datasets used to develop high-risk AI systems. This includes remote access via APIs or other technical means. Authorities are not required to justify access to data separately from their general market surveillance mandate.
Access to source code requires a reasoned request and only where both conditions are met: (a) source code access is necessary to assess conformity under Chapter III Section 2, and (b) testing and auditing based on the documentation and data already provided have been exhausted or proved insufficient. This is a higher threshold than dataset access — it protects proprietary algorithms while still enabling compliance review in genuinely contested cases.
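The source-code gate is a strict conjunction: a reasoned request plus both substantive conditions. A minimal sketch — the function and parameter names are our own, not terms from the Regulation:

```python
def source_code_access_permitted(reasoned_request: bool,
                                 necessary_for_conformity: bool,
                                 other_means_insufficient: bool) -> bool:
    """Sketch of the Article 74(13) threshold: access requires a reasoned
    request AND necessity for the Chapter III Section 2 conformity
    assessment AND that testing/auditing on the documentation and data
    already provided has been exhausted or proved insufficient."""
    return (reasoned_request
            and necessary_for_conformity
            and other_means_insufficient)
```

If any one input is false — say, the authority has not yet exhausted testing on the provided datasets — source code stays off-limits, while dataset access under the lower Article 74(12) threshold remains available.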
Market surveillance authorities supervise testing of high-risk AI systems in real-world conditions, whether within AI regulatory sandboxes or outside them. They may suspend or terminate testing, or require modifications, if a serious incident occurs or if the conditions of Articles 60–61 are not met.
Where an AI system presents a risk to health, safety, or fundamental rights, the authority may require corrective action within 15 working days. If the operator fails to act, the authority can provisionally prohibit or restrict the system, withdraw it from the market, or recall it. Even a compliant AI system may be restricted under Article 82 if it presents a real-world risk despite meeting all formal requirements.
National public authorities supervising compliance with fundamental rights obligations (including non-discrimination law) may separately request and access documentation created under the EU AI Act when reviewing high-risk AI systems in Annex III. Where documentation is insufficient, they may request the MSA to organise technical testing of the system.
Article 85 provides that any natural or legal person who has grounds to consider that the EU AI Act has been infringed may submit a complaint to the relevant market surveillance authority. The exact text:
“Without prejudice to other administrative or judicial remedies, any natural or legal person having grounds to consider that there has been an infringement of the provisions of this Regulation may submit complaints to the relevant market surveillance authority. In accordance with Regulation (EU) 2019/1020, such complaints shall be taken into account for the purpose of conducting market surveillance activities, and shall be handled in line with the dedicated procedures established therefor by the market surveillance authorities.”
The Commission, acting through the AI Office, has exclusive competence to supervise and enforce Chapter V (General-Purpose AI Models, Arts 53–55) under Article 88(1). In practice, only the Commission can investigate and take enforcement action against GPAI model providers; national market surveillance authorities may request it to exercise those powers where necessary.
All information and data obtained by market surveillance authorities, the Commission, and notified bodies in the course of applying the EU AI Act is subject to strict confidentiality obligations under Article 78. Specifically protected are: intellectual property rights and trade secrets (including source code), national and public security interests, ongoing criminal or administrative proceedings, and information classified under Union or national law. Authorities must put in place adequate cybersecurity measures to protect acquired data and must delete data as soon as it is no longer needed. Special rules apply for law enforcement operational data — including the requirement that technical documentation remains within the premises of law enforcement authorities when they are also providers of high-risk AI systems.
COM(2025) 837 (the Digital Omnibus II regulation) does not amend the market surveillance framework established by Articles 74–84 or the enforcement provisions of Chapter IX. See the 837 overview →
It depends on the type of AI system and how it is used. For high-risk AI systems integrated into regulated products (e.g., medical devices, machinery), the market surveillance authority is the sectoral authority designated under the relevant Union harmonisation legislation. For high-risk AI systems used by financial institutions, it is the national financial supervisory authority. For high-risk AI systems used by law enforcement, border management, or justice authorities, Member States designate the data protection supervisory authority or another authority meeting the conditions of Directive (EU) 2016/680. For GPAI models, the AI Office at the Commission has exclusive supervisory and enforcement powers under Article 88; where a high-risk AI system is based on a GPAI model developed by the same provider, the AI Office also exercises market surveillance powers over that system.
Article 85 of the EU AI Act gives any natural or legal person the right to lodge a complaint with the relevant market surveillance authority if they have grounds to consider that the regulation has been infringed. The complaint must be taken into account for market surveillance activities, and authorities handle complaints through their established procedures. Article 85 is separate from the right to explanation under Article 86, which gives affected persons the right to ask for an explanation of a specific decision made about them.
Under Article 74(12), market surveillance authorities are granted full access to documentation as well as training, validation, and testing datasets — including remote access via APIs or other technical means. Under Article 74(13), authorities may also request access to the source code of a high-risk AI system, but only on a reasoned request and only where: (a) source code access is necessary to assess conformity under Chapter III Section 2, and (b) testing and auditing based on data and documentation have been exhausted or proved insufficient. All information obtained is subject to the confidentiality obligations in Article 78.
The AI Office, established within the European Commission, has exclusive powers to supervise and enforce Chapter V (General-Purpose AI Models) under Article 88. This means only the Commission — acting through the AI Office — can investigate and take enforcement action against GPAI model providers. National market surveillance authorities may request the Commission to exercise those powers where necessary. The AI Office also coordinates joint investigations across Member States (Art 74(11)), provides 30-day information assistance to national authorities investigating high-risk systems built on GPAI models (Art 75(3)), and monitors compliance with approved codes of practice by GPAI providers (Art 89).
Under Article 79, the authority evaluates the system and, if it finds non-compliance, requires the operator to take corrective action within 15 working days (or sooner under applicable Union harmonisation legislation). If the operator fails to act, the authority can provisionally prohibit or restrict the system, withdraw it from the market, or recall it. These provisional measures are notified to the Commission and other Member States. If no other Member State or the Commission objects within three months (30 days for Article 5 prohibited-practice violations), the measure is deemed justified. Operators retain procedural rights under Article 18 of Regulation (EU) 2019/1020.
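The two deadlines in this procedure can be sketched as simple date arithmetic. The function names are our own; the working-day counter ignores public holidays, and the three-month objection window is approximated as 90 calendar days — both simplifying assumptions, not rules from the Act:

```python
from datetime import date, timedelta

def corrective_action_deadline(finding: date, working_days: int = 15) -> date:
    """Count forward 15 working days (Article 79 default); weekends are
    skipped, public holidays ignored in this sketch."""
    d, remaining = finding, working_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

def objection_deadline(notified: date, art5_violation: bool) -> date:
    """Union safeguard window: 30 days for Art 5 prohibited-practice
    cases, otherwise three months (approximated here as 90 days)."""
    return notified + timedelta(days=30 if art5_violation else 90)
```

A finding notified on Monday 6 January 2025, for example, gives the operator until Monday 27 January to take corrective action before provisional measures become available.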
EU AI Act Penalties
Art 99 — fine tiers: up to €35M or 7% of worldwide turnover for Art 5 violations, €15M/3% for most other breaches, €7.5M/1% for supplying incorrect information; GPAI model providers face up to €15M/3% under Art 101
Right to Explanation of AI Decisions
Art 86 — individual right to ask deployers how a high-risk AI system shaped a decision
High-Risk AI Checklist
Classify your system against all 8 Annex III domains before MSA contact
AI Provider Obligations
Full Chapter III stack — conformity assessment, CE marking, EU database registration
Conformity Assessment
Internal control (Annex VI) vs notified body (Annex VII) — determine your track
Prohibited AI Practices (Art 5)
The 8 banned practices — Art 5 violations trigger 30-day (not 3-month) Union safeguard procedure
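The Article 99 penalty ceilings referenced above all share one structure: a fixed amount paired with a share of worldwide annual turnover, with the higher of the two applying — or, for SMEs, the lower (Art 99(6)). A minimal sketch; the constant and function names are our own:

```python
# (fixed ceiling in EUR, turnover fraction) per tier
ART_5_VIOLATION   = (35_000_000, 0.07)  # prohibited practices, Art 99(3)
OTHER_OBLIGATIONS = (15_000_000, 0.03)  # most other breaches, Art 99(4)
INCORRECT_INFO    = (7_500_000, 0.01)   # misleading information, Art 99(5)

def fine_ceiling(fixed_eur: int, turnover_fraction: float,
                 worldwide_turnover_eur: float, sme: bool = False) -> float:
    """Ceiling = the higher of the fixed amount and the turnover share;
    for SMEs the lower of the two applies. Illustrative only — actual
    fines are set by national authorities within these ceilings."""
    share = worldwide_turnover_eur * turnover_fraction
    return min(fixed_eur, share) if sme else max(fixed_eur, share)
```

For an undertaking with €1bn worldwide turnover, an Art 5 violation is capped at €70M (7% exceeds the €35M fixed amount); an SME with the same turnover would face at most €35M.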
Regumatrix identifies which market surveillance authority governs your AI system, what documentation you must be ready to provide, and whether your system could trigger an Article 85 complaint — so you can address gaps before an investigation opens.
Get started free