The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive AI regulation — a directly applicable law that classifies AI by risk level and penalises violations with fines up to €35 million or 7% of global turnover. This guide covers everything: who it applies to, the 5 risk tiers, the 8 high-risk domains, compliance deadlines, and what COM(2025) 836 proposes to change.
Key facts at a glance

- **2 Aug 2026** — main compliance deadline for most organisations
- **€35M / 7%** — maximum fine for prohibited AI practices (whichever is higher)
- **113 articles** — plus 13 annexes, applying directly across the EU
Not sure if the AI Act applies to your organisation?
Run a free AI compliance analysis — our tool maps your AI use cases to the relevant articles in under 5 minutes.
Analyse my system free — 3 checks included

The EU AI Act is a directly applicable European Union regulation (Regulation (EU) 2024/1689) that sets harmonised rules for the development, placing on the market, and use of AI systems. It was published in the Official Journal on 12 July 2024 and entered into force on 1 August 2024.
Under Article 1, the regulation has seven stated purposes: creating a single legal framework for AI in the EU; prohibiting AI practices that pose unacceptable risks; establishing requirements for high-risk AI; imposing transparency obligations; setting rules for general-purpose AI models; establishing governance and market surveillance mechanisms; and supporting innovation by SMEs.
The AI Act defines an AI system (Article 3(1)) as a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs — such as predictions, content, recommendations, or decisions — that can influence physical or virtual environments.
Article 2 sets a broad territorial scope. The regulation applies to each of the following roles in the AI value chain, wherever they operate within or direct their activities toward the EU:

- **Providers** — develop and place AI on the EU market under their own name or trademark; bear the primary compliance burden.
- **Deployers** — use an AI system in their own operations under their authority; includes businesses using AI-powered SaaS products that qualify as high-risk.
- **Importers** — bring AI systems from third-country providers onto the EU market.
- **Distributors** — make AI systems available on the EU market without being the provider or importer.
- **Product manufacturers** — integrate an AI system as a safety component of a regulated product (e.g., a medical device, vehicle, or machinery).
- **Third-country providers and deployers** — established outside the EU, where the AI system's output is used within the EU.
Your obligations — and the penalty for non-compliance — depend entirely on which of the five tiers your AI falls into.

**1. Unacceptable risk (prohibited).** Outright bans: social scoring, subliminal manipulation, untargeted facial-image scraping, emotion recognition at work and in education, criminal risk profiling based on personality traits, and real-time remote biometric identification in public spaces (narrow law-enforcement exceptions apply).
Max penalty: €35M / 7% of global annual turnover (whichever is higher; the lower figure for SMEs)

**2. High risk.** AI in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice — plus AI safety components in regulated products.
Max penalty: €15M / 3% of global annual turnover (whichever is higher; the lower figure for SMEs)

**3. Limited risk (transparency).** Chatbots must disclose they're AI. Deepfake content must be labelled. Emotion recognition and biometric categorisation systems must notify users.
Max penalty: €15M / 3% of global annual turnover (whichever is higher; the lower figure for SMEs)

**4. General-purpose AI (GPAI).** GPAI model providers must maintain technical documentation, a copyright policy, and a training data summary. High-impact models (>10²⁵ FLOPs) additionally face adversarial testing and incident reporting.
Max penalty: €15M / 3% of global annual turnover (whichever is higher; the lower figure for SMEs)

**5. Minimal risk.** The vast majority of AI — spam filters, recommendation engines, inventory tools, basic chatbots used internally — falls here. Only voluntary codes of conduct apply.
Article 5 is already in force — since 2 February 2025
The 8 prohibited AI practices have been enforceable since 2 February 2025. These are not future rules. If your organisation is using any of these practices today, you are already in violation.
See all 8 banned practices with examples

AI systems listed in Annex III are automatically classified as high-risk under Art. 6(2). There is a derogation in Art. 6(3) — an AI system is not high-risk if it performs only a narrow procedural task, merely improves the result of a previously completed human activity, or only prepares for a human decision — but profiling of natural persons is always high-risk regardless of this derogation.
A second category of high-risk AI (Art. 6(1)) covers AI that is a safety component of a product requiring third-party conformity assessment under Annex I legislation (e.g., medical devices, machinery, vehicles, aviation equipment).
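The Article 6 classification logic reduces to a short decision procedure. The sketch below is an illustrative simplification of Art. 6(1)–(3) — the function name and flags are our own, not terminology from the regulation:

```python
def is_high_risk(annex_iii_listed: bool,
                 profiles_natural_persons: bool,
                 derogation_condition_met: bool,
                 safety_component_annex_i: bool) -> bool:
    """Illustrative sketch of the Art. 6 high-risk classification.

    annex_iii_listed:          falls under an Annex III use case
    profiles_natural_persons:  performs profiling of natural persons
    derogation_condition_met:  an Art. 6(3) condition applies (narrow
                               procedural task, improves a prior human
                               activity, or only prepares a human decision)
    safety_component_annex_i:  safety component of an Annex I product
                               requiring third-party conformity assessment
    """
    if safety_component_annex_i:            # Art. 6(1)
        return True
    if annex_iii_listed:                    # Art. 6(2)
        if profiles_natural_persons:        # profiling overrides the derogation
            return True
        return not derogation_condition_met # Art. 6(3) derogation
    return False

# A CV-screening tool (Annex III, employment) that profiles candidates:
print(is_high_risk(True, True, False, False))   # True
# An Annex III-adjacent tool doing only a narrow procedural task:
print(is_high_risk(True, False, True, False))   # False
```

In a real assessment each flag is itself a legal judgment call; the sketch only shows how the conditions combine.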
Even if your AI system is not high-risk, Article 50 may impose transparency obligations on you or your users. These apply from 2 August 2026 and cover four scenarios:

- AI systems that interact directly with people must disclose that the user is dealing with AI, unless this is obvious from the context.
- Providers of systems generating synthetic audio, image, video, or text must mark outputs as artificially generated in a machine-readable way.
- Deployers of emotion recognition or biometric categorisation systems must inform the people exposed to them.
- Deepfakes, and AI-generated text published to inform the public on matters of public interest, must be disclosed as artificially generated or manipulated.
Chapter V creates a dedicated regime for general-purpose AI (GPAI) models — large foundation models trained on vast data that can perform a wide range of tasks. These rules have applied since 2 August 2025.
All GPAI model providers must maintain technical documentation (Annex XI), make capability information available to AI system providers that integrate them, adopt a copyright compliance policy, and publish a training data summary. Providers of free and open-source GPAI models are exempt from the documentation and downstream-disclosure obligations — unless the model poses systemic risk.
What triggers systemic risk status?
Under Article 51, a GPAI model is presumed to have systemic risk if the cumulative computation used in training exceeds 10²⁵ floating-point operations (FLOPs). Models with systemic risk face additional obligations: adversarial testing, incident monitoring and reporting to the AI Office, cybersecurity safeguards, and reporting on energy consumption.
Read the full GPAI systemic risk guide

Article 113 sets these dates directly. Proposed changes from COM(2025) 836 would adjust some deadlines — those entries are marked as proposed.
| Date | Milestone | What applies | Basis |
|---|---|---|---|
| 1 Aug 2024 | Regulation enters into force | Regulation (EU) 2024/1689, published in the OJ on 12 July 2024, takes effect. | Art. 113 |
| 2 Feb 2025 | Prohibited AI ban applies | All 8 Article 5 prohibited practices become enforceable. Chapter I (definitions and scope) also applies. | Art. 113(3)(a) |
| 2 Aug 2025 | GPAI rules and governance apply | Chapter V (GPAI model obligations), Chapter VII (governance and AI Office), and Article 78 (confidentiality) come into force. | Art. 113(3)(b) |
| 2 Aug 2026 | Main compliance deadline | The bulk of the regulation applies — high-risk AI obligations, conformity assessments, EU database registration, transparency obligations (Art. 50), and penalties. | Art. 113 |
| 2 Aug 2027 | Article 6(1) Annex I products | AI as safety components inside regulated products (medical devices, machinery, vehicles, aviation) must comply — original AI Act deadline. | Art. 113(3)(c) |
| Proposed | 836 fallback — Annex III | COM(2025) 836 proposes: if the Commission has not confirmed that adequate harmonised standards are available, Annex III / Art. 6(2) high-risk obligations apply from a fallback date set in the proposal. | 836 Art. 113(d) |
| Proposed | 836 fallback — Annex I products | COM(2025) 836 proposes: if standards are not confirmed, AI in regulated products (Art. 6(1)) must comply from a later fallback date instead of August 2027. | 836 Art. 113(d) |
| 2 Aug 2030 | Public authority deployers | Providers and deployers of high-risk AI intended for public authority use must achieve full compliance — extended deadline. | Art. 111(2) |
For the full timeline with countdown timers, see the EU AI Act compliance timeline guide.
The AI Act assigns different obligations depending on your role in the AI value chain. Being a deployer — e.g., a company using a third-party AI service — does not exempt you from the regulation. Deployers of high-risk AI have significant direct obligations.
Provider obligations (high-risk AI) — the core duties under Chapter III, Section 2 and Art. 16 include:

- establishing a risk management system and data governance practices;
- drawing up technical documentation and enabling automatic event logging;
- designing for transparency, human oversight, accuracy, robustness, and cybersecurity;
- operating a quality management system, completing conformity assessment and CE marking, and registering the system in the EU database.

Deployer obligations (high-risk AI) — under Art. 26 deployers must, among other things:

- use the system in accordance with the provider's instructions for use;
- assign competent human oversight and ensure input data is relevant;
- monitor operation, suspend use where risks emerge, and report serious incidents;
- retain automatically generated logs, inform affected workers, and — for certain deployers — carry out a fundamental rights impact assessment (Art. 27).
Administrative fines apply to every violation — including technical requirements, documentation gaps, and transparency failures. Art. 5 fines have applied since February 2025; all others from August 2026.
| Violation type | Fixed cap | % of turnover | Article |
|---|---|---|---|
| Article 5 prohibited practices | €35,000,000 | 7% | Art. 99(3) |
| High-risk obligations + transparency (Art. 50) | €15,000,000 | 3% | Art. 99(4) |
| Incorrect or misleading information to authorities | €7,500,000 | 1.5% | Art. 99(5) |
Fines apply whichever figure is higher — fixed cap or % of global annual turnover. For SMEs and startups, the lower of the two figures applies (Art. 99(6)).
Full penalties and enforcement guide

Is your AI system high-risk?
Our analysis tool cross-references your AI use case against Annex III, Art. 6(1), and the Art. 6(3) derogation criteria — so you know exactly where you stand.
Check my system — 3 free analyses included

COM(2025) 836 — Digital Omnibus proposal (not yet enacted)
On 19 November 2025, the European Commission proposed COM(2025) 836, which would amend the EU AI Act's timeline to reduce compliance burden while harmonised standards are finalised.
New delay mechanism (proposed Art. 113(3)(d))
High-risk AI obligations in Chapter III Sections 1–3 would apply only after the Commission confirms that adequate harmonised standards or common specifications are available, with fallback dates taking over if no confirmation is made in time.
This is a legislative proposal — the 2 August 2026 deadline remains the current applicable law. Until 836 is enacted, organisations must plan for August 2026.
COM(2025) 837 — GDPR changes that affect AI compliance (not yet enacted)
The companion proposal COM(2025) 837 (Data and Privacy Omnibus) introduces GDPR amendments that directly reduce compliance friction for AI system developers and operators.
New GDPR Art. 9(2)(k) — special category processing exemption
Processing special categories of personal data (health, biometric, political opinions, etc.) would be permitted “in the context of the development and operation of an AI system or AI model” — subject to obligations to avoid collecting special data in training sets and to remove it when found. This removes a major GDPR blocker for AI training on real-world datasets.
Revised GDPR Art. 22 — automated decision-making clarified
Automated decisions necessary for entering into or performing a contract would be explicitly lawful “regardless of whether the decision could be taken otherwise than by solely automated means” — resolving a long-standing tension with AI Act-style human oversight requirements. Controllers must still offer the least-intrusive automated solution where alternatives exist.
COM(2025) 837 is a legislative proposal under ordinary legislative procedure and has not yet been enacted.
This guide covers the full structure of the EU AI Act. Where you land within it — prohibited, high-risk, limited risk, GPAI, or minimal — determines your exact obligations, your fine exposure, and the deadlines that apply to you.
Describe your AI system in plain language. Regumatrix checks it against every article of the EU AI Act and returns your risk tier, Annex classification, the exact obligations that apply, and your fine exposure under Article 99. Eight sections. About 30 seconds.
8-section report · Article citations · ~30 seconds · No credit card