Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
High-risk AI obligation · Article 14 · Up to €15M / 3% of turnover · applies in 4 months

Human Oversight in AI Systems

Article 14 of the EU AI Act requires every high-risk AI system to be designed so natural persons can effectively oversee it. Putting a human somewhere in the process is not enough. The law names five specific capabilities that oversight must enable — and the penalty for getting this wrong is up to €15 million or 3% of global turnover.

Penalty: up to €15,000,000 or 3% of global annual turnover

Failing to build human oversight into a high-risk AI system is a violation of Art 14 and Art 26(2), both caught by Art 99(4). The fine ceiling is whichever is higher — €15 million or 3% of total worldwide annual turnover. For SMEs and startups, the Art 99(6) cap applies the lower figure instead.

Oversight failures are among the most common findings in AI audits because they are invisible until something goes wrong. The AI keeps running, the human keeps approving, and no one notices that the oversight was nominal rather than real.

Does your system need human oversight controls?

Regumatrix checks your system against every Annex III category and returns your risk tier, the exact obligations that fire, and your fine exposure — in about 30 seconds.

Check your system

The provider's design obligation — Article 14(1) to 14(4)

Art 14(1) places the foundational obligation on you as a provider: your high-risk AI system must be designed and developed — including with appropriate human-machine interface tools — so that natural persons can effectively oversee it during the entire period of use.

Art 14(2) names the purpose of that oversight: to prevent or minimise risks to health, safety or fundamental rights that may emerge both under the system's intended purpose and under conditions of reasonably foreseeable misuse.

Art 14(4) then specifies five concrete capabilities that persons assigned to oversight must be enabled to exercise. Each is a design requirement, not just a training aspiration.

Art 14(4)(a) · Understand capabilities and limitations

The oversight person must be able to properly understand what the high-risk AI system can and cannot do, and monitor its operation — including detecting anomalies, dysfunctions, and unexpected performance. Your system must surface enough information about its own behaviour to make this possible.

Art 14(4)(b) · Remain aware of automation bias

The AI Act explicitly names automation bias — the tendency to over-rely on AI output without sufficient scrutiny. This is particularly relevant where your system provides information or recommendations that humans use to make decisions. Your design must actively counter this tendency, not simply present output and assume the user will question it. Consider: does your UI make the confidence level visible? Does it prompt the user to verify before acting?

Art 14(4)(c) · Correctly interpret the output

The oversight person must be able to correctly interpret what the system produces. Your system must provide interpretation tools and methods where needed — probability scores, explainability outputs, confidence ranges, or documentation that describes what each output means and what it does not mean. Returning a bare decision without context makes real oversight impossible.
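As a sketch of what "output with interpretation context" can look like in practice, the hypothetical wrapper below ships a score, a confidence range, and the main drivers alongside every decision. All field names are illustrative assumptions, not terminology from the Act:

```python
from dataclasses import dataclass, field

@dataclass
class InterpretableOutput:
    """Hypothetical wrapper: every decision carries interpretation
    context instead of being returned as a bare label."""
    decision: str                  # e.g. "flag", "approve"
    score: float                   # model probability in [0, 1]
    confidence_interval: tuple     # e.g. (0.61, 0.78)
    top_factors: list = field(default_factory=list)  # human-readable drivers
    caveats: list = field(default_factory=list)      # known limitations

    def summary(self) -> str:
        lo, hi = self.confidence_interval
        factors = ", ".join(self.top_factors) or "n/a"
        return (f"{self.decision} (score {self.score:.2f}, "
                f"CI {lo:.2f}-{hi:.2f}; factors: {factors})")

out = InterpretableOutput(
    decision="flag",
    score=0.71,
    confidence_interval=(0.61, 0.78),
    top_factors=["payment history", "account age"],
    caveats=["sparse data for applicants under 21"],
)
print(out.summary())
```

The point of the sketch is the shape, not the fields: a reviewer who sees only `"flag"` cannot exercise oversight; a reviewer who sees the score, range, and caveats can.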

Art 14(4)(d) · Override or disregard the output

In any particular situation, the oversight person must be able to decide not to use the system — or to disregard, override or reverse its output. This is a design requirement, not just a policy commitment. If your system's workflow makes it technically difficult or socially expensive to reject the AI's recommendation, the override capability does not meaningfully exist.
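One way to make override a first-class action rather than an afterthought is to record the human verdict in the same structure as the AI recommendation, so accepting and rejecting cost the same single step. A minimal illustrative sketch, where all names and validation rules are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OversightDecision:
    """Hypothetical record type: the human verdict is a first-class
    field, and overrides must carry a recorded basis."""
    ai_recommendation: str
    human_verdict: str            # "accept", "override", or "disregard"
    final_outcome: str
    rationale: Optional[str] = None

    def __post_init__(self):
        if self.human_verdict in ("override", "disregard") and not self.rationale:
            raise ValueError("override/disregard requires a recorded rationale")
        if self.human_verdict == "override" and self.final_outcome == self.ai_recommendation:
            raise ValueError("an override must change the outcome")

# Overriding is one step, same as accepting:
d = OversightDecision("reject", "override", "approve",
                      rationale="income documents verified manually")
```

A structure like this also produces the audit trail that distinguishes real oversight from rubber-stamping: every rejection has a documented basis.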

Art 14(4)(e) · Intervene or stop the system — the “stop button” requirement

The oversight person must be able to intervene in the system's operation or interrupt the system through a stop button or similar procedure that brings it to a safe state. A literal hardware button is not required — a software interrupt, pause mechanism, or escalation procedure qualifies. The requirement is that the interruption leaves the system and its environment in a safe state, not in an undefined or corrupted one.
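A hedged sketch of what a software "stop button" can look like: the worker checks a stop signal between atomic units of work, so an interruption always lands in a defined terminal state rather than mid-write. Purely illustrative, not an implementation mandated by the Act:

```python
import threading

class SafeStoppable:
    """Minimal software-interrupt sketch: stopping brings the worker
    to a defined state, never an undefined or half-written one."""
    def __init__(self):
        self._stop = threading.Event()
        self.state = "idle"
        self.processed = []

    def stop(self):
        """The 'stop button': callable at any time, from any thread."""
        self._stop.set()

    def run(self, items):
        self.state = "running"
        for item in items:
            if self._stop.is_set():
                self.state = "stopped_safe"  # defined terminal state
                return
            self.processed.append(item)      # one atomic unit of work
        self.state = "completed"

worker = SafeStoppable()
worker.stop()          # interrupt requested before (or during) processing
worker.run([1, 2, 3])  # worker halts without touching any item
```

The design choice that matters is the check between atomic units: interruption never splits a unit of work, which is what makes the resulting state "safe" rather than merely "halted".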

Two ways to deliver oversight — Article 14(3)

Art 14(3) gives providers two tracks for implementing oversight measures. You may use either one or both, depending on what is technically feasible and appropriate for your deployment context.

Track A · Art 14(3)(a)

Built into the system by the provider

You identify the oversight measures and build them into the high-risk AI system before it is placed on the market or put into service. This is the preferred track — the five capabilities in Art 14(4) become product features, not deployer responsibilities.

Condition: only where technically feasible. Where the nature of the system makes certain oversight features impossible to embed at product level, Track B picks up the remainder.

Track B · Art 14(3)(b)

Identified by provider, implemented by deployer

You identify the oversight measures before the system goes to market and specify them in the instructions for use. The deployer then implements them in their operational environment. This puts the obligation to operationalise oversight on the deployer, but you remain accountable for having identified what measures are required.

Vague instructions like "users should review outputs carefully" do not satisfy this requirement. The measures must be concrete and implementable.

The deployer's obligation — Article 26(2)

If you are a deployer using a high-risk AI system you did not build, your core human oversight obligation sits in Art 26(2). It is short and precise:

“Deployers shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support.”

The person you assign must satisfy four conditions. None of them is optional.

1. Competence

The person must understand what the AI system does, how it produces its outputs, and what the outputs mean in the specific deployment context. Generic digital literacy is not enough for a system that makes high-stakes recommendations.

2. Training

The person must have been trained specifically for the oversight role. The provider's instructions for use should specify what training is required. If the provider hasn't specified this, the deployer must determine it themselves.

3. Authority

The person must have the organisational authority to actually act on what they observe — to reject an AI recommendation, to pause the system, to escalate a concern. Assigning oversight to a junior employee who has no real power to override the AI's output does not satisfy this condition.

4. Support

The person must have the necessary support — time, tools, and resources — to perform oversight effectively. A reviewer with 2 seconds per AI recommendation and 500 cases per day does not have meaningful oversight capability, regardless of their competence or authority on paper.

Important: Art 26(3) clarifies that the deployer's obligation to implement human oversight does not override other Union or national law — and the deployer retains freedom to organise its own resources and activities to put the provider's oversight measures into practice. This means you can decide operationally how oversight is structured, but you cannot eliminate it.

Special rule: biometric identification requires two-person verification

Art 14(5) adds an extra layer of oversight specifically for Annex III point 1(a) systems — remote biometric identification. For these systems, the oversight regime goes further than the five capabilities in Art 14(4).

No action or decision may be taken on the basis of an identification result unless that result has been separately verified and confirmed by at least two natural persons.

Both verifying persons must have the necessary competence, training and authority. A single reviewer — even a highly qualified one — is not sufficient. The two-person rule is a structural design requirement, not a recommendation.
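The structural nature of the rule is easy to sketch: an identification result counts as confirmed only when at least two distinct, qualified people have signed off, so the same reviewer confirming twice never suffices. A minimal illustrative check, where the field names are assumptions:

```python
def confirm_identification(result_id, verifications):
    """Illustrative two-person rule: a result is actionable only after
    two or more *distinct* qualified verifiers have confirmed it."""
    confirmed_by = {
        v["verifier"]
        for v in verifications
        if v["result_id"] == result_id and v["confirmed"] and v["qualified"]
    }
    return len(confirmed_by) >= 2  # set membership deduplicates verifiers

verifications = [
    {"result_id": "r1", "verifier": "ana", "confirmed": True, "qualified": True},
    {"result_id": "r1", "verifier": "ana", "confirmed": True, "qualified": True},
]
# One person confirming twice is still one person: not actionable.
```

Using a set of verifier identities (rather than counting confirmation events) is what encodes "two natural persons" as a structural property instead of a procedural hope.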

Exception

The two-person requirement does not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control, or asylum where Union or national law considers applying this requirement to be disproportionate in that context. This is a narrow exception — it must be grounded in law, not operational convenience.

Is your oversight genuinely effective — or just nominal?

The grey area in human oversight is not whether oversight exists — it is whether it is real. These signals often indicate your current approach would not satisfy Art 14:

  • Reviewers approve AI recommendations at rates above 95% without documented basis for rejections
  • The UI shows the AI output prominently and the override option requires extra steps or confirmation dialogs
  • Staff report feeling unable to reject AI recommendations due to workload, peer pressure, or performance metrics
  • Oversight is assigned to someone with no authority to act — a junior analyst whose rejections require senior sign-off
  • Your system does not expose confidence scores, error rates, or any indication of when the AI is operating near the edge of its training distribution
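Signals like these can be screened for mechanically. The sketch below flags near-total approval rates and implausibly short review times from a decision log; the thresholds and field names are illustrative assumptions, not figures from the Act:

```python
def oversight_health(decisions, max_approval_rate=0.95, min_seconds_per_case=30):
    """Illustrative screen for 'nominal oversight' red flags.
    Each decision is a dict with 'verdict' and 'review_seconds'."""
    approvals = sum(1 for d in decisions if d["verdict"] == "accept")
    rate = approvals / len(decisions)
    # Upper median of review times, computed without external deps.
    median_time = sorted(d["review_seconds"] for d in decisions)[len(decisions) // 2]

    flags = []
    if rate > max_approval_rate:
        flags.append(f"approval rate {rate:.0%} exceeds {max_approval_rate:.0%}")
    if median_time < min_seconds_per_case:
        flags.append(f"median review time {median_time}s below {min_seconds_per_case}s")
    return flags

# A log resembling the reviewer described above: 99% approvals, 2s per case.
log = [{"verdict": "accept", "review_seconds": 2}] * 99 \
    + [{"verdict": "reject", "review_seconds": 2}]
```

A screen like this does not prove oversight is effective, but it surfaces the cases where it plainly is not, which is exactly where audit findings concentrate.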

Check your compliance posture — 3 free analyses

No changes are proposed under COM(2025) 836 or COM(2025) 837 for the human oversight obligations in Articles 14 and 26(2).

Frequently asked questions

What does 'human oversight' mean under the EU AI Act?

Article 14 defines human oversight as the ability of natural persons to effectively monitor and control a high-risk AI system during use. It is not simply having a human 'in the loop' — the law requires that the person assigned oversight actually has the tools, information, competence, training and authority to understand the system's output, detect problems, interpret results, and be able to override or stop the system when necessary.

Who is responsible for human oversight — the provider or the deployer?

Both, in different ways. Article 14(3) places the design obligation on providers: you must build oversight capabilities into the system before it reaches the market, or at minimum identify what measures the deployer needs to implement. Article 26(2) places the assignment obligation on deployers: you must assign oversight to natural persons who have the necessary competence, training, authority, and support. Providers design for oversight; deployers operationalise it.

What is automation bias and why does the AI Act mention it?

Automation bias is the tendency for humans to over-rely on AI outputs — accepting them without sufficient critical scrutiny because the system 'seems accurate'. Article 14(4)(b) explicitly requires that the system enable the oversight person to remain aware of this tendency, particularly for AI systems that provide information or recommendations that humans then use to make decisions. This is one of only a handful of places in the EU AI Act where a specific cognitive risk is explicitly named.

Does every high-risk AI system need a physical stop button?

Article 14(4)(e) requires that the oversight person be able to 'intervene in the operation of the high-risk AI system or interrupt the system through a stop button or a similar procedure that allows the system to come to a halt in a safe state'. This does not mandate a literal physical button — a software interrupt, pause mechanism, or escalation procedure that produces the same effect qualifies. The key word is 'safe state': the interruption must leave the system and its environment in a safe condition, not simply crash or produce undefined behaviour.

What is the two-person verification rule for biometric AI?

Article 14(5) applies specifically to Annex III point 1(a) systems — remote biometric identification. For these systems, no action or decision may be taken on the basis of an identification result unless that result has been separately verified and confirmed by at least two natural persons with the necessary competence, training, and authority. The only exception is for law enforcement, migration, border control, or asylum contexts where Union or national law considers this requirement disproportionate.

Related guides

Risk Management System

Art 9 — iterative process, foreseeable misuse, residual risk

AI Deployer Obligations

Full Art 26 obligations checklist for businesses using high-risk AI

Conformity Assessment Guide

Art 43 — when self-assessment vs notified body is required

Biometric AI Compliance

Two-person rule in context — Annex III HR-1 obligations

AI Provider Obligations

Complete checklist: Arts 9–21 for high-risk AI developers

EU AI Act Fines & Penalties

Full breakdown of all four penalty tiers under Art 99

Find out exactly what human oversight your system requires

Regumatrix checks your system against Annex III and every relevant article of the EU AI Act and returns: your risk tier, the specific obligations that apply (including Art 14 and Art 26(2)), your fine exposure under Art 99(4), and an 8-section cited compliance report. No credit card. Results in about 30 seconds.

Start free — 3 analyses included · No credit card · Results in ~30 seconds