Every guide you need to understand and meet EU AI Act obligations — organised by industry, risk level, obligation, and role. Updated for the 2025 Digital Omnibus proposals (COM 836 & COM 837).
Not sure where to start?
Does the AI Act apply to me?
Check whether your organisation, system, or use case is in scope — or covered by an exemption.
Scope & exclusions
Is my AI high-risk?
Work through the Article 6 + Annex III classification test to determine your risk tier and obligation level.
Risk classification
I use AI — someone else built it
If you deploy AI in your organisation but did not develop it, you are a “deployer” with a distinct set of obligations under Article 26.
Deployer obligations
Overview guides, the full compliance timeline, and key definitions — the right starting point for any organisation.
What the regulation is, who it applies to, the 4 risk tiers, and how to navigate it.
Every enforcement date in one place — Feb 2025, Aug 2026, and 836 fallback dates.
Provider, deployer, GPAI, systemic risk, intended purpose — all defined with article refs.
Military AI, personal use, pure research, open-source exceptions — what the Act doesn't cover.
Is your sector's AI high-risk? Each guide covers classification, exact obligations, and the specific Annex III domain or product-safety pathway that applies.
Diagnostic tools, clinical decision support, and surgical AI — Annex III §5 + MDR/IVDR pathway.
CV screening, candidate ranking, employee monitoring — high-risk under Annex III §4.
Creditworthiness AI, loan decisioning, insurance pricing — Annex III §5.
Student admission systems, exam proctoring, AI tutoring — high-risk AND chatbot transparency rules.
Face recognition, iris scanning, emotion detection — notified body required, some uses prohibited.
Life insurance pricing, health underwriting, claims assessment — separate guide from broad financial services.
Benefits eligibility, social services, tax AI — FRIA mandatory, extended 2030 deadline for public deployers.
Recidivism scoring, criminal profiling, polygraph — notified body mandatory, narrow Art 5 exception.
AI assisting judges, contract analysis, legal research tools — Annex III §8 and right to explanation.
Power grid, water utilities, road traffic management, SCADA — high-risk under Annex III §2.
Asylum processing, visa risk assessment, border crossing detection — overlaps with law enforcement.
AI as a medical device — MDR/IVDR + AI Act dual compliance. CE marking via notified body.
AI safety components in type-approved vehicles under Regulation 2019/2144 — Annex I pathway.
Annex I pathway + 7 technical corrections to EASA Regulation 2018/1139 under COM 836.
The four tiers — prohibited, high-risk, limited risk, minimal risk — and how to classify your AI system correctly.
All 8 Annex III domains + the Article 6(1) product safety track — one-page decision guide.
All 8 banned uses in plain English — social scoring, facial scraping, real-time biometric ID, emotion recognition at work and school.
Chatbot disclosure, deepfake labelling, AI-generated text marking — the limited-risk tier.
The narrow exception, 4-condition test, profiling void rule — and the €15M misclassification penalty.
Spam filters, recommenders, basic automation — what's fully out of scope and voluntary codes under Art 95.
Deep-dive guides for each mandatory requirement. If you've confirmed you're high-risk, start here.
Iterative process: identify, estimate, evaluate, mitigate. How to document foreseeable misuse.
What 'effective oversight' means legally — HITL, override controls, who must be assigned.
Self-assessment (Annex VI) vs notified body (Annex VII) — when each applies, who decides.
What 17+ elements must be documented before market placement — SME simplified form.
Who must register, what data to submit (Annex VIII), when — and what COM 836 removes.
Who must do it: public bodies + specific deployers. What to assess and when to register.
Training/validation/testing dataset obligations, bias handling, sensitive data rules.
Continuous monitoring, serious incident reporting obligations and what 836 changes.
What a QMS must cover: design, development, testing, monitoring — SME proportionality.
EU declaration of conformity (Annex V) then CE marking — which AI systems need it.
Obligations depend on your role in the AI supply chain. Find the guide for your position.
Full checklist for developers/vendors — Arts 9–21, conformity, CE marking, registration, post-market.
'I bought an AI tool' — what deployers must do: oversight, logging, worker notification, FRIA.
Extraterritorial reach, authorised representative requirement, obligations for US/UK/Asian providers.
Verifying CE marking, the ban on making non-compliant systems available, labelling requirements.
Integrating AI into your product under your own name = treated as provider. Obligations follow.
COM(2025) 836 and COM(2025) 837 propose significant amendments to the AI Act and GDPR. These pages explain what changes and what it means for your compliance programme.
Plain-English summary: SME relief, deadline delays, notified body simplification, AI Office expansion.
Simplified docs, lighter QMS, fines capped at the lower of the fixed sum or turnover percentage (not the higher), sandbox priority, new SMC category.
Art 22 automated decisions, Art 9 sensitive data, Art 86 right to explanation — the two regimes side by side.
GDPR AI training exemption, automated decisions, cookie consent reform, 96h breach notification.
New Art 9(2)(k) lawful basis for special category data, conditions, bias detection use.
When GDPR Art 22 applies, 837 update for contract automation, Art 14/26 oversight obligations.
Machine-readable signals (Arts 88a/88b), migration from ePrivacy to GDPR framework.
Extended timeline, ENISA single-entry reporting across NIS2, GDPR, DORA, eIDAS, CER.
Penalty structure, enforcement mechanisms, and individual rights — what's at stake for non-compliance.
€35M/7% for prohibited AI, €15M/3% for high-risk violations, €7.5M/1.5% for incorrect info — plus SME cap.
Market surveillance authorities, AI Office vs national regulators, investigation powers, recall.
Individuals' right to ask deployers how an AI made a decision — when it applies and what you must provide.
GPAI systemic risk, agentic AI, open-source exemptions, emotion recognition, regulatory sandboxes — specialist topics for technical and legal teams.
Foundation models and LLMs — Chapter V obligations, copyright policy, downstream provider duties.
10²⁵ FLOPs threshold — adversarial testing, AI Office incident reporting, energy consumption.
Art 53(2) exemptions for open-weights models, conditions that remove the exemption.
New Annex XIV classification codes (836), legal definition of agentic AI, multi-step autonomous systems.
How to apply, SME priority access, new EU-level sandbox added by 836, real-world testing.
Prohibited in workplace/education (Art 5), transparent elsewhere (Art 50), high-risk in other contexts.
Public and private social scoring both prohibited. What counts. Penalty: €35M/7%.
Labelling obligations, Art 50(4) disclosure rule, satire exceptions, 836 watermarking deadline.
Art 77 authority powers, FRIA (Art 27), right to complain (Art 85) and right to explanation (Art 86).
For compliance teams navigating multiple EU regulations simultaneously — where the AI Act overlaps with GDPR, DSA, NIS2, and DORA.
Scope, who it covers, data rights, automated decisions, and sanctions — side by side. Do you need both?
Recommendation systems, VLOPs, content moderation — and 836's AI Office exclusive competence.
Annex III §2 + NIS2 essential entity requirements — two obligations on the same system.
AI Act Annex III §5 + DORA ICT risk — financial sector AI caught by both regimes.
Describe your AI system in a sentence. Get back: risk tier, applicable Annex, every obligation with article citations, required actions before August 2026, and fine exposure under Article 99. Eight structured sections. About 30 seconds. 3 free analyses, no credit card.