This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
You use AI built by someone else · Up to €15M or 3% of global turnover (Art. 99)

AI Deployer Obligations: Your Complete EU AI Act Checklist

If your business uses a high-risk AI system professionally — and you did not build it — you are a “deployer” under Article 3(4). Article 26 sets your obligations. They apply from 2 August 2026. Here is every one, in the order you need to meet them.

Deployer violations carry Article 99 penalties from day one.

Breaching Article 26 obligations — such as failing to assign human oversight or skipping worker notification — triggers fines of up to €15,000,000 or 3% of global annual turnover, whichever is higher. Transparency breaches under Article 50 — including deepfake labelling — carry the same ceiling under Article 99(4). For SMEs and small mid-caps (SMCs), Article 99(6) applies the lower of the two figures — build that into your risk calculation.
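To see how the higher/lower rule plays out, here is a minimal arithmetic sketch — the turnover figures are hypothetical, and this is illustration, not legal advice:

```python
def art_99_exposure(global_turnover_eur: float, is_sme: bool) -> float:
    """Ceiling of an Article 99(4) fine: EUR 15M or 3% of global annual
    turnover -- the higher of the two, or the lower for SMEs (Art. 99(6))."""
    fixed_cap = 15_000_000
    turnover_cap = 0.03 * global_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Large company, EUR 2bn turnover: 3% = EUR 60M > EUR 15M -> EUR 60M ceiling
print(art_99_exposure(2_000_000_000, is_sme=False))  # 60000000.0
# SME, EUR 100M turnover: 3% = EUR 3M < EUR 15M -> EUR 3M ceiling
print(art_99_exposure(100_000_000, is_sme=True))     # 3000000.0
```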

Deadlines at a glance

  • Article 26 — all deployers: applies from 2 August 2026 (about 4 months away)
  • COM(2025) 836 proposal — public authority deployers only: 2 August 2030 (about 53 months away, if 836 passes)

Not sure whether your AI tool is high-risk?

Regumatrix checks your system against all Annex I and Annex III categories, returns your risk tier, and lists every Article 26 deployer obligation that applies — plus your fine exposure under Article 99. Takes about 30 seconds.

Check my AI tool — 3 free analyses included

BEFORE DEPLOYMENT

Steps to complete before you switch it on

Complete these before you put the high-risk AI system into service. If you skip any, you are non-compliant from day one.

Art. 26(2)

Assign and train oversight persons

Before go-live, identify who is responsible for human oversight. They need the competence, training, and authority to understand the system's outputs, detect when something is wrong, and act on it. This is not a formality — the oversight role is live and ongoing from the moment the system operates.

Art. 26(7)

Notify workers and their representatives

If you are an employer, tell workers' representatives and affected workers before you put the high-risk AI system into service. This applies to any AI system used in hiring, performance monitoring, output evaluation, or shift scheduling. Follow the applicable Union and national consultation rules — in many Member States, works councils have co-determination rights that bind you before deployment.

Art. 26(8)

Check EU database registration — public bodies only

Public authorities and Union bodies must confirm the system is registered in the EU AI database before using it. If it is not registered, do not use it — and notify the provider or distributor. Registration for critical infrastructure systems (Annex III point 2) happens at national level, not in the central EU database.

Art. 26(9)

Use Art. 13 documentation to prepare your DPIA

Providers must supply instructions documenting the system's capabilities, limitations, and monitoring guidance under Article 13. Use that documentation when you run your GDPR Article 35 data protection impact assessment. If your provider has not supplied enough detail to complete the DPIA, request it — your deployment schedule depends on having it before go-live.

DURING OPERATION

Continuous obligations while the system is live

These apply every day the system operates — not just at launch.

Art. 26(1)

Use the system per the instructions

Operate the AI system strictly in accordance with the provider's instructions for use. Deploying it for purposes not covered in those instructions is a compliance breach — and it may trigger Article 25(1) provider status, with the full provider obligation set attached.

Art. 26(4)

Input data quality — if you control inputs

If you control what data goes into the system, make sure it is relevant and sufficiently representative for the system's intended purpose. This obligation stacks on top of the provider's dataset obligations under Article 10 — it is an independent deployer compliance requirement, not just a technical matter.

Art. 26(5)

Monitor the system and report incidents

Monitor operation against the instructions for use. If the system presents a risk under Article 79(1), suspend use immediately and notify the provider or distributor and the national market surveillance authority without undue delay. For serious incidents, notify the provider first, then importers, distributors, and authorities. Financial institutions subject to Union financial services governance rules satisfy the monitoring obligation by complying with those rules.

Art. 26(6)

Keep operational logs for at least 6 months

Retain the logs automatically generated by the system for at least 6 months — provided those logs are under your control. Applicable law may require a longer period. Financial institutions must keep logs as part of their internal governance documentation under Union financial services law.
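In practice, the binding retention period is the longer of the Article 26(6) floor and any other applicable rule. A minimal sketch of that logic, with a hypothetical per-sector overlay table:

```python
from datetime import date, timedelta

AI_ACT_FLOOR_DAYS = 183  # Article 26(6): at least six months (~183 days)

# Hypothetical longer periods under other applicable law -- illustrative only
SECTOR_OVERLAYS_DAYS = {"financial_services": 5 * 365}

def earliest_log_deletion(log_created: date, sector: str | None = None) -> date:
    """Earliest date a system-generated log may be deleted: the Art. 26(6)
    six-month floor, or any longer applicable retention period."""
    days = max(AI_ACT_FLOOR_DAYS, SECTOR_OVERLAYS_DAYS.get(sector, 0))
    return log_created + timedelta(days=days)

print(earliest_log_deletion(date(2026, 8, 2)))  # 2027-02-01
```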

Art. 26(11)

Inform individuals the system affects them

If you use an Annex III high-risk system that makes or assists in decisions about natural persons, tell those persons they are subject to it. This obligation applies on top of Article 50 transparency rules and GDPR Article 22 rights. For law enforcement uses, Article 13 of Directive 2016/680 governs instead.

Art. 26(12)

Cooperate with competent authorities

Cooperate with any action the relevant national competent authority takes relating to your use of the system. Provide documentation, log access, and information when requested. Authorities must treat the information as confidential under Article 78.

Law enforcement deployers — additional obligation: Article 26(10) requires deployers using post-remote biometric identification systems to obtain judicial or administrative authorisation before each use — ex ante, or within 48 hours — limited to specific criminal investigations. Annual reports to market surveillance and data protection authorities are also required.

Fundamental Rights Impact Assessment (Article 27)

Article 27 requires certain deployers to run a structured assessment of impact on fundamental rights before first deployment. It stacks on top of — and does not replace — your GDPR data protection impact assessment under Article 35 of Regulation 2016/679 (Art. 27(4)).

Who must do a FRIA?

  • → Bodies governed by public law — government agencies, public hospitals, public universities
  • → Private public-service providers — private entities delivering public services using any Annex III high-risk system
  • → Credit and insurance AI deployers — deployers using Annex III point 5(b) (creditworthiness assessment) or point 5(c) (life and health insurance risk scoring) systems

Critical infrastructure operators (Annex III point 2) are explicitly exempt from Article 27.

What a FRIA must cover — Art. 27(1)(a)–(f)

  • (a) The processes in which the system will be used
  • (b) Time period and frequency of use
  • (c) Categories of persons and groups likely to be affected
  • (d) Specific risks of harm to those categories, using the provider's Art. 13 documentation
  • (e) How human oversight will be implemented
  • (f) Measures to respond to risks — including internal governance and complaint mechanisms

Report the results to the relevant market surveillance authority before first deployment (Art. 27(3)). The AI Office will publish a questionnaire template to simplify the process (Art. 27(5)).
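Until that template is published, some teams pre-structure the six elements internally. A minimal sketch of such a record — a hypothetical internal format, not the official Art. 27(5) questionnaire:

```python
from dataclasses import dataclass, field

@dataclass
class FRIARecord:
    """Internal record mirroring Article 27(1)(a)-(f). Hypothetical format."""
    processes: str                # (a) processes the system will be used in
    period_and_frequency: str     # (b) time period and frequency of use
    affected_groups: list[str]    # (c) categories of persons/groups affected
    risks_of_harm: list[str]      # (d) specific risks, drawn from Art. 13 docs
    oversight_plan: str           # (e) how human oversight is implemented
    response_measures: list[str] = field(default_factory=list)  # (f) governance, complaints

fria = FRIARecord(
    processes="Creditworthiness scoring in consumer loan origination",
    period_and_frequency="Continuous; every incoming loan application",
    affected_groups=["loan applicants", "co-signers"],
    risks_of_harm=["discriminatory scoring via proxy variables"],
    oversight_plan="A credit officer reviews every adverse decision",
)
```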

Additional transparency obligations (Chapter IV)

These apply to deployers of certain AI system types regardless of whether the system is classified as high-risk.

Art. 50(3)

Disclose emotion recognition or biometric categorization

If you deploy an AI system that detects emotions or categorises people by biometric characteristics, inform the natural persons exposed to it. Process their personal data in accordance with GDPR, Regulation 2018/1725, and Directive 2016/680 as applicable. The exception: systems used for lawfully authorised criminal investigation. Violations carry fines up to €15M or 3% of global turnover under Article 99(4).

Art. 50(4)

Label deepfakes and AI-generated public interest text

If you use an AI system that produces or manipulates images, audio, or video constituting a deepfake, disclose the artificial origin — at the time of first exposure. For evidently artistic or satirical content, label it without blocking the display of the work. For AI-generated text on matters of public interest published to the public, disclose the AI origin — unless a human reviewed the text and a person holds editorial responsibility for its publication.
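The Article 50(4) duties reduce to a short decision tree. A rough sketch — the boolean flags are illustrative simplifications, and borderline cases belong with counsel:

```python
def art_50_4_duty(is_deepfake: bool, artistic_or_satirical: bool,
                  criminal_investigation_authorised: bool,
                  public_interest_text: bool,
                  human_review_with_editorial_responsibility: bool) -> str:
    """Rough decision tree over the Article 50(4) disclosure duties."""
    if is_deepfake:
        if criminal_investigation_authorised:
            return "no disclosure (law-enforcement exception)"
        if artistic_or_satirical:
            return "label AI use without hampering display of the work"
        return "disclose artificial origin at first exposure"
    if public_interest_text:
        if human_review_with_editorial_responsibility:
            return "no disclosure (editorial-responsibility exception)"
        return "disclose AI origin"
    return "outside Article 50(4)"
```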

Warning: six situations where a deployer becomes a provider

Article 25(1) shifts the full provider obligation set onto you if any of these apply:

  • → You put your company name or trademark on a third-party AI system
  • → You changed the system's intended purpose after purchase
  • → You made substantial modifications to the system's design or training
  • → You integrated a GPAI model into a product and sell it under your brand
  • → You are a product manufacturer who added an AI safety component
  • → You contractually agreed in writing to assume provider obligations

In all six situations you become the provider for purposes of Article 16 — with conformity assessment, CE marking, and EU database registration attached. Provider obligations are set out in Articles 9–21, 43, and 47–49.

Check whether you are a deployer or a provider — free
PROPOSAL — not yet enacted law · COM(2025) 836 — Digital Omnibus on AI

Extended deadline for public authority deployers (836 Art 1 pt30)

COM(2025) 836 proposes a hard deadline of 2 August 2030 for providers and deployers of high-risk AI systems intended to be used by public authorities. If enacted, they would have until that date to meet all AI Act requirements — a four-year extension beyond the general 2 August 2026 deadline.

The extension covers both the public authority as deployer and any provider whose system is specifically intended for public-authority use. It does not extend Article 5 prohibited AI obligations — those have applied since 2 February 2025 and are not subject to any transitional provision.

Status: COM(2025) 836 is in trilogue as of early 2026 and is not yet enacted law.

No changes are proposed under COM(2025) 837 for this topic. COM(2025) 836 changes are covered above.

Frequently asked questions

Do Article 26 obligations apply to every AI tool I use?

No — only when you use a high-risk AI system under Article 6. High-risk means: AI safety components in regulated products listed in Annex I (medical devices, vehicles, civil aviation, machinery) or AI systems in the 8 domains of Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice). Standard chatbots, recommendation engines, spam filters, and general office automation are not high-risk. If your tool is outside those categories, Article 26 does not apply.
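A minimal sketch of that two-branch Article 6 test — the category sets are shorthand for Annex I and Annex III, not the full legal text, and the Article 6(3) derogations are ignored:

```python
ANNEX_I_PRODUCTS = {"medical devices", "vehicles", "civil aviation", "machinery"}
ANNEX_III_DOMAINS = {"biometrics", "critical infrastructure", "education",
                     "employment", "essential services", "law enforcement",
                     "migration", "justice"}

def is_high_risk(safety_component_of: str | None, use_domain: str | None) -> bool:
    """Rough triage: safety component of an Annex I product, or a use case
    falling in one of the eight Annex III domains."""
    return (safety_component_of in ANNEX_I_PRODUCTS
            or use_domain in ANNEX_III_DOMAINS)

print(is_high_risk(None, "employment"))      # True -> Article 26 applies
print(is_high_risk(None, "spam filtering"))  # False -> outside Article 26
```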

What is the difference between a deployer and a provider under the EU AI Act?

A provider builds the AI system and places it on the market — their obligations run through Articles 9–21, 43, and 47–49: risk management, technical documentation, CE marking, conformity assessment, EU database registration. A deployer uses a high-risk AI system built by someone else, in a professional context, under their own authority. Their obligations are narrower: correct use, human oversight, log retention, worker notification, incident reporting under Article 26. The line can shift: under Article 25(1), a deployer becomes a provider if they rebrand the system under their name, make substantial modifications, or change its intended purpose so that new high-risk uses arise.

Does my organisation need a Fundamental Rights Impact Assessment (FRIA)?

Article 27 makes the FRIA mandatory for three categories: (1) bodies governed by public law — government agencies, public hospitals, public universities; (2) private entities delivering public services and using an Annex III high-risk system; and (3) deployers running Annex III point 5(b) or 5(c) systems — AI for creditworthiness or life and health insurance risk assessment. Critical infrastructure operators (Annex III point 2) are exempt from Article 27. The FRIA must cover six elements: processes in which the system will be used, time period and frequency, categories of affected persons, specific risks to those persons, oversight implementation plan, and measures to respond to those risks. Results must be reported to the relevant market surveillance authority before first deployment.

Must I notify employees before deploying AI in the workplace?

Yes. Article 26(7) requires deployers who are employers to inform workers' representatives and the directly affected workers before a high-risk AI system is put into service at the workplace. This applies to any AI that monitors performance, evaluates output, schedules work, or processes employment-relevant data. The notification must follow applicable Union and national information and consultation rules. In Germany, Austria, and the Netherlands, for example, works councils typically have co-determination rights — not just a right to receive notice — before the system goes live.

When must I disclose that AI generated or manipulated content?

Under Article 50(4), if you use an AI system to produce images, audio, or video constituting a deepfake, you must disclose that the content is artificially generated — at the time of first exposure. Two exceptions apply: the use is authorised by law for criminal investigation; or the content is evidently artistic or satirical, in which case you only need to signal that AI was used without blocking the work. Separately, if you publish AI-generated text on matters of public interest, you must disclose it as AI-generated — unless a human reviewed it and a person holds editorial responsibility for the publication. Violations of Article 50 carry fines up to €15M or 3% of global turnover under Article 99(4).

Related compliance guides

  • AI Provider Obligations (Arts 9–21, 43, 47–49)
  • Is My AI High-Risk? Checklist
  • Human Oversight Requirements (Article 14)
  • EU AI Act Fines & Penalties (Article 99)
  • EU AI Act + GDPR: How the Two Interact
  • EU AI Act Compliance Timeline 2025–2030

Find out exactly which deployer obligations apply to your AI system

Regumatrix analyses your AI system and returns: risk tier, Annex classification, every Article 26 obligation that applies, whether you need a FRIA under Article 27, your fine exposure under Article 99, and an 8-section cited compliance report. Takes about 30 seconds.

Start free — 3 analyses included

No credit card required