Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Regulation (EU) 2024/1689 · In force since 1 August 2024 · COM(2025) 836 updates included

EU AI Act: The Complete Guide

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive AI regulation — a directly applicable law that classifies AI by risk level and penalises violations with fines up to €35 million or 7% of global turnover. This guide covers everything: who it applies to, the 5 risk tiers, the 8 high-risk domains, compliance deadlines, and what COM(2025) 836 proposes to change.

Key facts at a glance

  • 2 Aug 2026: main compliance deadline (most organisations)
  • €35M / 7%: maximum fine for prohibited AI practices
  • 113 articles, plus 13 annexes, applying across the EU


What is the EU AI Act?

The EU AI Act is a directly applicable European Union regulation (Regulation (EU) 2024/1689) that sets harmonised rules for the development, placing on the market, and use of AI systems. It was published in the Official Journal on 12 July 2024 and entered into force on 1 August 2024.

Under Article 1, the regulation has seven stated purposes: creating a single legal framework for AI in the EU; prohibiting AI practices that pose unacceptable risks; establishing requirements for high-risk AI; imposing transparency obligations; setting rules for general-purpose AI models; establishing governance and market surveillance mechanisms; and supporting innovation by SMEs.

The AI Act defines an AI system (Article 3(1)) as a machine-based system that, for a given set of objectives, operates with varying levels of autonomy and infers outputs — such as predictions, content, recommendations, or decisions — that can influence real or virtual environments.

Who does the EU AI Act apply to?

Article 2 sets a broad territorial scope. The regulation applies to any of the following who operate within or direct their activities toward the EU:

Provider (Art. 3(3))

Develops and places AI on the EU market under their own name/trademark — bears the primary compliance burden.

Deployer (Art. 3(4))

Uses an AI system in their own operations under their authority. Includes businesses using AI-powered SaaS products that qualify as high-risk.

Importer (Art. 3(6))

Brings AI systems from third-country providers onto the EU market.

Distributor (Art. 3(7))

Makes AI systems available on the EU market without being the provider.

Product manufacturer (Art. 3(15))

Integrates an AI system as a safety component of a regulated product (e.g., a medical device, vehicle, or machinery).

Non-EU providers and deployers (Art. 2(1)(c))

Providers and deployers established outside the EU where the AI system's output is used within the EU.

Key exclusions

  • Military & national security (Art. 2(3)) — Fully excluded — defence applications fall entirely outside the regulation.
  • Pure scientific R&D (Art. 2(6)) — Research and development before market placement is excluded — but real-world testing in live environments is not.
  • Personal non-professional use (Art. 2(10)) — Individuals using AI for their own personal purposes only (not business) are out of scope.
  • Free and open-source AI (Art. 2(12)) — Open-source AI models are excluded — unless placed on the market as high-risk systems, used in Article 5 prohibited practices, or subject to Article 50 transparency obligations.

The 5 risk tiers

Your obligations — and the penalty for non-compliance — depend entirely on which tier your AI falls into.

Prohibited · Art. 5 · 8 practices

Outright bans: social scoring, subliminal manipulation, facial scraping, emotion AI at work/school, criminal personality profiling, real-time biometric ID.

Max penalty: €35M / 7% of global annual turnover (whichever is higher, lower for SMEs)

High-risk · Art. 6 + Annex III · 8 domains

AI in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice — plus AI safety components in regulated products.

Max penalty: €15M / 3% of global annual turnover (whichever is higher, lower for SMEs)

Limited risk · Art. 50 · Transparency only

Chatbots must disclose they're AI. Deepfake content must be labelled. Emotion recognition and biometric categorisation systems must notify users.

Max penalty: €15M / 3% of global annual turnover (whichever is higher, lower for SMEs)

General-purpose AI · Arts. 51–56 · >10²⁵ FLOPs = systemic

GPAI model providers must maintain technical documentation, a copyright policy, and a training data summary. High-impact models (training compute >10²⁵ FLOPs) additionally face adversarial testing and incident reporting.

Max penalty: €15M / 3% of global annual turnover (whichever is higher, lower for SMEs)

Minimal risk · Art. 95 · Most AI tools

The vast majority of AI — spam filters, recommendation engines, inventory tools, basic chatbots used internally — falls here. Only voluntary codes of conduct apply.

Article 5 is already in force — since 2 February 2025

The 8 prohibited AI practices have been enforceable since 2 February 2025. These are not future rules. If your organisation is using any of these practices today, you are already in violation.

See all 8 banned practices with examples

The 8 high-risk domains (Annex III)

AI systems listed in Annex III are automatically classified as high-risk under Art. 6(2). There is a derogation in Art. 6(3) — AI is not high-risk if it performs only a narrow procedural task, simply improves a prior human activity, or only prepares for a human decision — but profiling of natural persons is always high-risk regardless of this derogation.

A second category of high-risk AI (Art. 6(1)) covers AI that is a safety component of a product requiring third-party conformity assessment under Annex I legislation (e.g., medical devices, machinery, vehicles, aviation equipment).
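The two classification routes and the Art. 6(3) derogation can be sketched as a small decision function. This is an illustrative simplification, not legal logic — the field names are hypothetical, and a real assessment would weigh each Art. 6(3) criterion individually:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical fields describing one AI use case (illustration only)."""
    in_annex_iii: bool              # listed in an Annex III domain
    profiles_natural_persons: bool  # performs profiling of natural persons
    narrow_procedural_task: bool    # meets an Art. 6(3)(a)-(d) derogation criterion
    safety_component_annex_i: bool  # safety component of an Annex I product

def is_high_risk(s: AISystem) -> bool:
    # Route 1: safety component of a regulated Annex I product (Art. 6(1))
    if s.safety_component_annex_i:
        return True
    # Route 2: listed in Annex III (Art. 6(2))
    if s.in_annex_iii:
        # Profiling of natural persons is always high-risk,
        # regardless of the derogation (Art. 6(3), last subparagraph)
        if s.profiles_natural_persons:
            return True
        # Art. 6(3) derogation: narrow procedural / preparatory tasks escape
        return not s.narrow_procedural_task
    return False
```

For example, a CV-ranking tool that profiles candidates stays high-risk even if it only "prepares" a human decision, while a pure document-formatting step inside an Annex III workflow could fall under the derogation.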

HR-1: Biometrics
  • Remote biometric identification (RBI) systems
  • Biometric categorisation inferring sensitive attributes
  • Emotion recognition systems

HR-2: Critical Infrastructure
  • Safety components of critical digital infrastructure
  • Road traffic management systems
  • Water, gas, heating, electricity supply systems

HR-3: Education
  • AI determining access or admission to education
  • Systems evaluating learning outcomes
  • Monitoring students during examinations

HR-4: Employment
  • CV screening and recruitment ranking tools
  • Decisions on promotion, termination, task allocation
  • Performance monitoring and evaluation systems

HR-5: Essential Services
  • Public authority benefit/healthcare eligibility AI
  • Creditworthiness scoring (except fraud detection)
  • Life and health insurance risk assessment and pricing
  • Emergency call classification and dispatching

HR-6: Law Enforcement
  • Victim risk and criminal recidivism assessment
  • Evidence reliability evaluation tools
  • Criminal profiling AI

HR-7: Migration & Border
  • Asylum, visa and residence permit applications
  • Border entry risk assessment and polygraphs
  • Document verification AI

HR-8: Justice & Elections
  • AI assisting judicial authorities (research, interpret, apply law)
  • AI intended to influence election or referendum outcomes

Article 50: Transparency obligations

Even if your AI system is not high-risk, Article 50 may impose transparency obligations on you or your users. These apply from 2 August 2026 and cover four scenarios:

Chatbots and AI-to-human interaction
Providers must design AI systems that interact directly with people so that users are informed they are talking to an AI — unless it is obvious to a reasonably well-informed user.
Synthetic content (AI-generated media)
Providers of AI that generates synthetic audio, images, video, or text must mark outputs in a machine-readable format so they are detectable as AI-generated. Applies to GPAI models like large language models.
Deepfakes
Deployers of AI that creates deepfakes (AI-generated or manipulated images/video/audio of real people) must disclose this — unless used for clearly artistic, satirical, or fictional purposes with appropriate labelling.
Emotion and biometric categorisation systems
Deployers must inform people when they are being monitored by emotion recognition or biometric categorisation AI, and must comply with GDPR.

General-purpose AI models (Arts. 51–56)

Chapter V creates a dedicated regime for general-purpose AI (GPAI) models — large foundation models trained on vast data that can perform a wide range of tasks. These rules applied from 2 August 2025.

All GPAI model providers must maintain technical documentation (Annex XI), make capability information available to AI system providers that integrate them, adopt a copyright compliance policy, and publish a training data summary. Open-source GPAI model providers are exempt from the documentation and disclosure obligations — unless their model has systemic risk.

What triggers systemic risk status?

Under Article 51, a GPAI model is presumed to have systemic risk if the cumulative computation used in training exceeds 10²⁵ floating-point operations (FLOPs). Models with systemic risk face additional obligations: adversarial testing, incident monitoring and reporting to the AI Office, cybersecurity safeguards, and reporting on energy consumption.
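The threshold is a straight compute comparison. The sketch below estimates training compute with the common rule of thumb from the scaling-law literature (~6 FLOPs per parameter per training token) — a rough approximation, not anything the regulation itself prescribes:

```python
# Art. 51 presumption threshold: cumulative training compute above
# 10^25 floating-point operations triggers presumed systemic risk.
SYSTEMIC_RISK_FLOPS = 1e25

def training_flops_estimate(params: float, tokens: float) -> float:
    # Widely used approximation: ~6 FLOPs per parameter per token
    # (forward + backward pass). Actual cumulative compute may differ.
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return training_flops_estimate(params, tokens) > SYSTEMIC_RISK_FLOPS
```

Under this estimate, a 70-billion-parameter model trained on 15 trillion tokens lands around 6.3 × 10²⁴ FLOPs — just below the presumption threshold — while a 400-billion-parameter model on the same data would exceed it.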

Read the full GPAI systemic risk guide

Compliance timeline

Article 113 sets these dates directly. Proposed changes from COM(2025) 836 would adjust some deadlines — those entries are marked as proposed.

1 Aug 2024 · In effect

Regulation enters into force

Regulation (EU) 2024/1689 published in the OJ and took effect.

Art. 113

2 Feb 2025 · In effect

Prohibited AI ban applies

All 8 Article 5 prohibited practices are now enforceable. Chapter I (definitions and scope) also applies.

Art. 113(3)(a)

2 Aug 2025 · In effect

GPAI rules and governance apply

Chapter V (GPAI model obligations), Chapter VII (governance and AI Office), and Article 78 (confidentiality) all apply from this date.

Art. 113(3)(b)

2 Aug 2026

Main compliance deadline

The bulk of the regulation applies — high-risk AI obligations, conformity assessments, EU database registration, transparency obligations (Art. 50), and penalties.

Art. 113

2 Aug 2027

Article 6(1) Annex I products

AI as safety components inside regulated products (medical devices, machinery, vehicles, aviation) must comply — original AI Act deadline.

Art. 113(3)(c)

2 Dec 2027 · Proposed (COM(2025) 836)

836 fallback — Annex III

COM(2025) 836 proposes: if the Commission has not confirmed adequate harmonised standards are available, Annex III / Art. 6(2) high-risk obligations apply from this date.

COM(2025) 836, proposed Art. 113(3)(d)

2 Aug 2028 · Proposed (COM(2025) 836)

836 fallback — Annex I products

COM(2025) 836 proposes: if standards not confirmed, AI in regulated products (Art. 6(1)) must comply from this date instead of August 2027.

COM(2025) 836, proposed Art. 113(3)(d)

2 Aug 2030

Public authority deployers

Providers and deployers of high-risk AI intended for public authority use must achieve full compliance — extended deadline under Article 111(2).

Art. 111(2)

For the full timeline with countdown timers, see the EU AI Act compliance timeline guide.
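The enacted milestones above reduce to a simple date lookup. A minimal sketch (enacted Art. 113 / Art. 111(2) dates only; the proposed COM(2025) 836 fallbacks are deliberately excluded because they are not yet law):

```python
from datetime import date

# Enacted milestones from Art. 113 and Art. 111(2); labels are paraphrases.
MILESTONES = [
    (date(2024, 8, 1), "entry into force"),
    (date(2025, 2, 2), "Art. 5 prohibitions and Chapter I apply"),
    (date(2025, 8, 2), "GPAI rules and governance (Chapters V, VII) apply"),
    (date(2026, 8, 2), "main deadline: high-risk obligations, Art. 50, penalties"),
    (date(2027, 8, 2), "Art. 6(1) Annex I product systems must comply"),
    (date(2030, 8, 2), "public-authority high-risk deployments (Art. 111(2))"),
]

def applicable_on(day: date) -> list[str]:
    # Every milestone whose date has passed applies on `day`.
    return [label for when, label in MILESTONES if when <= day]
```

Calling `applicable_on(date.today())` during 2026 but before 2 August, for instance, returns the first three entries: the prohibitions and GPAI rules already bind, while the main high-risk obligations do not yet.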

Obligations by role

The AI Act assigns different obligations depending on your role in the AI value chain. Being a deployer — e.g., a company using a third-party AI service — does not exempt you from the regulation. Deployers of high-risk AI have significant direct obligations.

Provider obligations (high-risk AI)

  • Risk management system (Art. 9)
  • Data governance for training data (Art. 10)
  • Full technical documentation (Art. 11)
  • Record-keeping / logging capability (Art. 12)
  • Transparency and user information (Art. 13)
  • Human oversight by design (Art. 14)
  • Accuracy, robustness, cybersecurity (Art. 15)
  • Conformity assessment & CE marking (Arts. 43–49)
  • EU database registration (Art. 71)
  • Post-market monitoring (Art. 72)
Full provider obligations guide

Deployer obligations (high-risk AI)

  • Use AI in accordance with provider instructions (Art. 26(1))
  • Assign human oversight to qualified staff (Art. 26(2))
  • Fundamental rights impact assessment (Art. 27)
  • Monitor AI for risks and report to provider (Art. 26(5))
  • Inform employees where AI affects their work (Art. 26(7))
  • Notify authority of serious incidents (Art. 73)
  • Maintain use logs for 6 months minimum (Art. 26(6))
  • AI literacy for all staff involved (Art. 4)
Full deployer obligations guide

Penalties (Article 99)

Administrative fines apply to every violation — including technical requirements, documentation gaps, and transparency failures. Art. 5 fines have applied since February 2025; all others from August 2026.

Violation type | Fixed cap | % of turnover | Article
Article 5 prohibited practices | €35,000,000 | 7% | Art. 99(3)
High-risk obligations + transparency (Art. 50) | €15,000,000 | 3% | Art. 99(4)
Incorrect or misleading information to authorities | €7,500,000 | 1.5% | Art. 99(5)

Fines apply whichever figure is higher — fixed cap or % of global annual turnover. For SMEs and startups, the lower of the two figures applies (Art. 99(6)).

Full penalties and enforcement guide


COM(2025) 836 — Digital Omnibus proposal (not yet enacted)

On 19 November 2025, the European Commission proposed COM(2025) 836, which would amend the EU AI Act's timeline to reduce compliance burden while harmonised standards are finalised.

New delay mechanism (proposed Art. 113(3)(d))

High-risk AI obligations in Chapter III Sections 1–3 would apply only after the Commission confirms that adequate harmonised standards or common specifications are available, then:

  • 6 months after Commission decision → Annex III / Art. 6(2) systems (fallback: 2 December 2027)
  • 12 months after Commission decision → Annex I product systems (fallback: 2 August 2028)

This is a legislative proposal — the 2 August 2026 deadline remains the current applicable law. Until 836 is enacted, organisations must plan for August 2026.

Read the full Digital Omnibus 836 guide

COM(2025) 837 — GDPR changes that affect AI compliance (not yet enacted)

The companion proposal COM(2025) 837 (Data and Privacy Omnibus) introduces GDPR amendments that directly reduce compliance friction for AI system developers and operators.

New GDPR Art. 9(2)(k) — special category processing exemption

Processing special categories of personal data (health, biometric, political opinions, etc.) would be permitted “in the context of the development and operation of an AI system or AI model” — subject to obligations to avoid collecting special data in training sets and to remove it when found. This removes a major GDPR blocker for AI training on real-world datasets.

Revised GDPR Art. 22 — automated decision-making clarified

Automated decisions necessary for entering into or performing a contract would be explicitly lawful “regardless of whether the decision could be taken otherwise than by solely automated means” — resolving a long-standing tension with AI Act-style human oversight requirements. Controllers must still offer the least-intrusive automated solution where alternatives exist.

COM(2025) 837 is a legislative proposal under ordinary legislative procedure and has not yet been enacted.

Frequently asked questions

What is the EU AI Act?
The EU AI Act (Regulation (EU) 2024/1689) is a directly applicable EU regulation that sets harmonised rules for AI systems placed on the EU market or used in the EU. It classifies AI by risk level — from prohibited practices to high-risk systems to minimal-risk tools — and imposes compliance obligations on providers (who develop and market AI) and deployers (who use AI in their operations). The regulation entered into force on 1 August 2024.
Who does the EU AI Act apply to?
The EU AI Act applies to providers placing AI on the EU market (including non-EU providers), deployers using AI in the EU, importers and distributors of AI systems, and product manufacturers who integrate AI as a safety component. It also captures providers and deployers located outside the EU if their AI system's output is used in the EU. Exclusions include military/defence use, personal non-professional use, and pure scientific R&D.
What are the 5 risk tiers in the EU AI Act?
The EU AI Act has five risk tiers: (1) Prohibited — 8 practices banned outright under Article 5 including social scoring, subliminal manipulation, and facial scraping; (2) High-risk — AI in 8 Annex III domains (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) plus AI as safety components in regulated products; (3) Limited risk — AI with transparency obligations under Article 50 (chatbots, deepfakes, emotion/biometric systems); (4) General-purpose AI (GPAI) — large models subject to separate obligations under Articles 51–56; (5) Minimal risk — all other AI, with only voluntary codes of conduct.
When does the EU AI Act apply to my organisation?
Key dates: Article 5 prohibited practices have been enforceable since 2 February 2025. GPAI model obligations apply from 2 August 2025. The main compliance deadline for most high-risk AI systems is 2 August 2026. AI as a safety component in regulated products (Annex I) must comply by 2 August 2027 under the original regulation (COM(2025) 836 proposes extending this to 2 August 2028 subject to Commission confirmation). Public authority deployers of high-risk AI have until 2 August 2030.
What is the difference between a provider and a deployer under the EU AI Act?
A provider (Article 3(3)) develops and places an AI system on the market or puts it into service under their own name or trademark — they hold the primary compliance responsibility including technical documentation, conformity assessment, CE marking, and EU database registration. A deployer (Article 3(4)) uses an AI system under their own authority — their obligations are lighter but include conducting fundamental rights impact assessments for high-risk AI, ensuring human oversight, and monitoring AI performance. Both must maintain AI literacy under Article 4.

Explore the EU AI Act in depth

  • Prohibited AI Practices (Art. 5)
  • Compliance Timeline 2025–2030
  • AI Act Penalties Guide
  • Provider Obligations (Arts. 16–27)
  • Deployer Obligations (Art. 26)
  • GPAI & Systemic Risk (Art. 51)

Know exactly where your AI system stands — before a regulator tells you

This guide covers the full structure of the EU AI Act. Where you land within it — prohibited, high-risk, limited risk, GPAI, or minimal — determines your exact obligations, your fine exposure, and the deadlines that apply to you.

Describe your AI system in plain language. Regumatrix checks it against every article of the EU AI Act and returns your risk tier, Annex classification, the exact obligations that apply, and your fine exposure under Article 99. Eight sections. About 30 seconds.
