
EU AI Act for Critical Infrastructure AI (Annex III §2)

AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating, or electricity are high-risk under Annex III §2 of the EU AI Act. Full Chapter III obligations apply — but self-assessment is available without a mandatory notified body. COM(2025) 836 proposes extending the deadline to 2 December 2027.

High-risk classification: full Chapter III obligations from 2 August 2026

Violations of high-risk AI system obligations under Chapter III carry the Art 99(4) penalty: up to €15 million or 3% of global annual turnover, whichever is higher. Energy companies, water utilities, traffic management authorities, and their AI suppliers are all within scope.

Is your infrastructure AI a safety component under Annex III §2?

Regumatrix analyses your system description against the exact §2 scope text and the Art 6(3) narrow exception criteria, identifies whether the safety component threshold is met, and maps your full obligation chain in a cited 8-section report — in about 30 seconds.

Check in 30 seconds — 3 free analyses

What is in scope: Annex III §2 — Critical Infrastructure

Annex III §2 — HIGH-RISK

AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.

In scope — typical examples

  • AI controlling load distribution or fault isolation in an electricity grid
  • AI for SCADA systems managing gas network pressure and valve control
  • AI that manages road traffic signal timing to prevent accidents
  • AI for water treatment process control (filtration, disinfection)
  • AI in district heating network management affecting supply continuity
  • AI embedded in critical data centre infrastructure management affecting availability

Outside §2 scope

  • AI used only for analytics dashboards or demand forecasting (no direct safety function)
  • AI for customer billing or metering administration
  • AI for predictive maintenance scheduling (not direct control)
  • AI in administrative back-office functions of utility companies
  • Navigation apps for individual users (not managing road traffic infrastructure)

Art 6(3) narrow exception — not automatically high-risk

Under Art 6(3), an Annex III system is NOT high-risk if it does not pose a significant risk of harm and falls into one of the narrow categories: (a) narrow procedural task only; (b) improving results of a previously completed human activity; (c) detecting decision-making patterns without replacing human assessment; or (d) preparatory task to a human assessment. Note: systems that perform profiling of natural persons are always high-risk. If you rely on Art 6(3), document the assessment under Art 6(4) and register under Art 49(2).

Full high-risk obligation chain (provider perspective)

Providers of Annex III §2 AI systems — typically industrial software vendors, OT/ICS platform providers, and traffic management system developers — must satisfy all obligations below before market placement.

Art 9

Risk Management System

Establish, implement, document, and maintain a risk management system covering the entire lifecycle. For infrastructure AI, this must address asymmetric failure modes: AI failures in critical infrastructure may have cascading effects across interconnected systems. Identify foreseeable misuse scenarios (e.g., adversarial manipulation of traffic signals or SCADA systems) and document mitigations.
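
Art 9 is mostly process and documentation, but a machine-readable risk register makes it easier to keep the hazard and foreseeable-misuse analysis current across releases. The sketch below is a minimal, illustrative structure; the field names, the 1 to 5 scoring scale, and the example entry are assumptions, not anything prescribed by the Regulation.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class HazardEntry:
    """One entry in an Art 9 risk register (illustrative structure, not a prescribed format)."""
    hazard_id: str
    description: str                       # e.g. loss of frequency control on model failure
    foreseeable_misuse: str                # e.g. spoofed SCADA telemetry fed to the model
    severity: int                          # assumed 1-5 scale, project-defined
    likelihood: int                        # assumed 1-5 scale, project-defined
    mitigations: list[str] = field(default_factory=list)
    residual_risk_accepted: bool = False
    last_reviewed: date = field(default_factory=date.today)

    def risk_score(self) -> int:
        # Simple severity x likelihood product; substitute your own risk method.
        return self.severity * self.likelihood

register = [
    HazardEntry(
        hazard_id="HZ-001",
        description="Incorrect load-shedding recommendation during sensor dropout",
        foreseeable_misuse="Adversarially manipulated telemetry injected upstream of the model",
        severity=5,
        likelihood=2,
        mitigations=["input plausibility checks", "operator confirmation before shedding load"],
    ),
]
```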

Art 10

Data Governance

Training and test data must be representative of the operational environment — including seasonal, geographic, and demand variations in energy or water networks. Historical infrastructure data may reflect legacy system constraints that no longer apply. Data governance must ensure training data does not encode unsafe operational assumptions.
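
One practical way to evidence representativeness is to measure how the training data covers the operational strata mentioned above (season, region, demand level) and flag thin strata. A minimal sketch assuming a pandas DataFrame whose column names (`season`, `region`, `demand_band`) and 2% floor are placeholders, not requirements from the Act.

```python
import pandas as pd

def coverage_report(train: pd.DataFrame, strata: list[str], min_share: float = 0.02) -> pd.DataFrame:
    """Share of training rows per operational stratum, flagging under-represented strata."""
    counts = train.groupby(strata).size().rename("rows").reset_index()
    counts["share"] = counts["rows"] / len(train)
    counts["under_represented"] = counts["share"] < min_share
    return counts.sort_values("share")

# Usage (assumed columns): report = coverage_report(train_df, ["season", "region", "demand_band"])
# Strata flagged as under-represented feed back into the Art 10 data governance record.
```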

Art 11

Technical Documentation (Annex IV)

Full Annex IV technical documentation: system description, design, intended purpose, architecture, data used, validation methodology, known limitations, and post-market monitoring plan. For infrastructure AI integrated into larger operational technology stacks, document the interfaces clearly.

Art 12

Record-Keeping & Logging

The system must technically allow automatic recording of events (logs) over its lifetime. For safety-critical infrastructure AI, logs must capture inputs received, decisions made or recommended, and any human override events. Logs may be reviewed by national market surveillance authorities.
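
The Act does not prescribe a log format, but the three event classes above (inputs received, decisions made or recommended, human overrides) map naturally onto an append-only structured record. A minimal JSON-lines sketch; the schema, field names, and system identifier are assumptions.

```python
import json
import time
from typing import Any

def log_event(path: str, event_type: str, payload: dict[str, Any]) -> None:
    """Append one structured audit event to a JSON-lines log (illustrative schema)."""
    record = {
        "ts": time.time(),                      # timestamp of the event
        "event": event_type,                    # "input", "decision", or "operator_override"
        "system_id": "traffic-signal-ai-v3",    # placeholder system identifier
        "payload": payload,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Usage: capture the event classes an Art 12 audit trail is meant to reconstruct.
# log_event("audit.jsonl", "input", {"sensor": "loop_42", "flow_vph": 1180})
# log_event("audit.jsonl", "decision", {"phase_plan": "P7", "confidence": 0.91})
# log_event("audit.jsonl", "operator_override", {"operator": "ctrl-room-2", "reason": "incident"})
```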

Art 13

Transparency & Instructions for Use

Instructions for use must be provided to deployers (operators of infrastructure). These must describe operating conditions, performance limits, known failure modes, and the human oversight procedures required. For complex SCADA integrations, provide integration guidance for the deployer's operators.

Art 14

Human Oversight

Particularly critical for infrastructure AI. The system must allow trained operators to understand outputs, detect anomalies, and override AI-driven control actions. Automatic failsafe mechanisms must be documented. Operators must be able to intervene manually at any time. This requirement has significant UX and interface design implications for control room systems.
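
In control-room terms, being able to intervene at any time usually means the AI output passes through an operator-controllable mode with a documented failsafe when confidence collapses. A minimal sketch of that gating logic; the mode names and the confidence floor are assumptions, not terms from the Act.

```python
from enum import Enum

class Mode(Enum):
    AI_CONTROL = "ai_control"   # AI setpoint applied automatically
    ADVISORY = "advisory"       # AI recommends, operator confirms before actuation
    MANUAL = "manual"           # operator has taken over, AI output ignored

def select_setpoint(mode: Mode, ai_setpoint: float, operator_setpoint: float | None,
                    ai_confidence: float, failsafe_setpoint: float) -> float:
    """Decide which setpoint is actually actuated, given the oversight mode."""
    if mode is Mode.MANUAL and operator_setpoint is not None:
        return operator_setpoint                 # operator can override at any time
    if ai_confidence < 0.5:                      # assumed confidence floor
        return failsafe_setpoint                 # documented safe fallback state
    if mode is Mode.ADVISORY:
        return operator_setpoint if operator_setpoint is not None else failsafe_setpoint
    return ai_setpoint
```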

Art 15

Accuracy, Robustness & Cybersecurity

Infrastructure AI is a high-value target for adversarial attacks. Mandatory cybersecurity protections include input validation against adversarial manipulation, resistance to data poisoning in online-learning systems, and resilience under unexpected operational conditions. Accuracy requirements must be validated under edge conditions representative of real grid, traffic, or utility scenarios.
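
A common first layer against adversarial or corrupted inputs is plausibility checking before anything reaches the model: hard ranges, rate-of-change limits, and cross-checks between sensors. A minimal sketch; the numeric bounds are placeholders that would come from the asset's engineering specification, not from the Act.

```python
def telemetry_is_plausible(frequency_hz: float, prev_frequency_hz: float,
                           voltage_kv: float) -> bool:
    """Reject physically implausible grid telemetry before it reaches the model."""
    if not 45.0 <= frequency_hz <= 55.0:             # assumed hard range for a 50 Hz grid
        return False
    if abs(frequency_hz - prev_frequency_hz) > 1.0:  # assumed max step between samples
        return False
    if not 300.0 <= voltage_kv <= 450.0:             # assumed transmission-level band
        return False
    return True

# Rejected inputs should be logged (Art 12) and routed to the failsafe path (Art 14),
# not silently dropped.
```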

Conformity assessment: self-assessment (Annex VI, no notified body)

Art 43(2) provides the rule: for AI systems in Annex III points 2 to 8, providers must follow the conformity assessment procedure based on internal control as referred to in Annex VI. This procedure does not require a notified body. Unlike biometric AI (Annex III §1), critical infrastructure AI may be self-assessed.

Self-assessment route (Annex VI internal control):

  1. Apply harmonised standards where available (CEN/CENELEC AI standards for industrial/critical systems)
  2. Prepare Annex IV technical documentation
  3. Conduct internal control assessment against all Chapter III Section 2 requirements
  4. Draw up EU Declaration of Conformity (Art 47)
  5. Affix CE marking (Art 48)
  6. Register in the EU AI database (Art 49)

Read the full conformity assessment guide →

NIS2 Directive cross-reference

Infrastructure operators within the scope of the NIS2 Directive (Directive (EU) 2022/2555) also have cybersecurity risk management obligations that overlap with Art 15 of the EU AI Act. The Art 15 cybersecurity requirements for the AI system exist in parallel to — and do not replace — NIS2 obligations on the operator. Coordinate compliance programmes where both apply.

Deployer obligations (infrastructure operators)

Deploy within the intended purpose

Operate the AI strictly within the conditions and use case defined by the provider's instructions for use. Do not use a traffic management AI for a water network application, or operate outside validated operating conditions.

Art 26(1)

Assign human oversight roles

Designate trained operators who can monitor AI outputs, detect anomalies, and intervene. For critical infrastructure, this is operationalised as control room protocols and shift handover procedures that include AI monitoring tasks.

Art 26(2)

Conduct FRIA if public body deployer

Public authorities deploying high-risk AI must complete a Fundamental Rights Impact Assessment under Art 27(1). Grid operators or traffic authorities that are public or quasi-public bodies are in scope.

Art 27

Monitor for anomalous outputs and report incidents

Continuously monitor AI operation. If the system produces outputs that could threaten infrastructure safety — e.g., recommendations that would destabilise grid frequency or contaminate a water supply — suspend use immediately and notify the provider and market surveillance authority under Art 26(5).

Art 26(5)
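
Operationally, "suspend use immediately" implies an automated envelope check on the AI's outputs plus a kill switch the monitoring layer can trip. A minimal sketch of that pattern; the envelope value and the two notification stubs are assumptions standing in for the deployer's own procedures.

```python
class AiSuspended(Exception):
    """Raised once the system has been taken out of service by the deployer."""

def suspend_system() -> None:
    """Placeholder: switch control to manual and stop AI-driven actuation."""

def notify_provider_and_authority(reason: str) -> None:
    """Placeholder: trigger the provider and market surveillance notifications (Art 26(5))."""

def check_recommendation(recommended_shed_mw: float, max_plausible_shed_mw: float = 500.0) -> None:
    """Suspend use if a recommendation falls outside the validated operating envelope."""
    if recommended_shed_mw > max_plausible_shed_mw:   # assumed envelope from grid studies
        suspend_system()
        notify_provider_and_authority(
            reason=f"Implausible load-shed recommendation: {recommended_shed_mw} MW"
        )
        raise AiSuspended("output outside validated envelope")
```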

Retain access logs

Retain automatically generated logs for at least 6 months. These are the audit trail for regulatory inspections and incident investigations.

Art 26(6)
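
Retention goes wrong in both directions: pruning logs before six months breaches Art 26(6), while keeping them indefinitely may conflict with other data rules. A minimal sketch of a retention sweep over rotated JSON-lines files, reusing the assumed log layout from the Art 12 example; the 183-day figure simply encodes a floor of at least six months.

```python
import time
from pathlib import Path

RETENTION_DAYS = 183  # keep at least six months (Art 26(6)); extend where other law requires

def sweep_expired_logs(log_dir: str) -> list[str]:
    """Delete rotated audit files only once they are older than the retention floor."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    removed = []
    for path in Path(log_dir).glob("audit-*.jsonl"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(str(path))
    return removed
```
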
PROPOSAL — not yet enacted law

COM(2025) 836: Deadline extended to 2 December 2027 (Annex III systems)

The Omnibus Simplification proposal COM(2025) 836 adds a new point (d) to Article 113 that delays the substantive high-risk AI obligations for all Annex III systems — including §2 critical infrastructure AI.

Proposed mechanism

Chapter III Sections 1, 2, and 3 shall apply 6 months after a Commission decision confirming that adequate compliance support measures (harmonised standards, common specifications, guidelines) are available for Annex III systems. In the absence of that decision within the required timeframe — or where the resulting date is later — the obligations apply from 2 December 2027 for systems classified as high-risk under Art 6(2) and Annex III.

  • Current law deadline: 2 August 2026 (4 months away), the general AI Act application date
  • Proposed fallback deadline: 2 December 2027 (20 months away), under COM(2025) 836, pending agreement

Common grey areas in critical infrastructure AI classification

  • AI that generates recommendations for human operators (without direct automated control of physical processes) — may still be a safety component if recommendations are typically acted on without independent review
  • AI for predictive maintenance — normally outside scope, but in scope if maintenance deferrals could cause a safety-critical failure
  • AI used in cybersecurity monitoring of critical infrastructure networks — depends on whether output can directly affect operational safety
  • AI integrated into national emergency response systems that happen to manage traffic — check Annex III §5(d) (emergency dispatch) as an alternative

Verify your classification — free

Frequently asked questions

Which critical infrastructure AI systems are high-risk under Annex III §2?
Annex III §2 covers AI systems intended to be used as safety components in the management and operation of: (1) critical digital infrastructure (e.g., networks, cloud infrastructure, data centres); (2) road traffic management; and (3) the supply of water, gas, heating or electricity. The key qualifier is 'safety component' — the AI must have a function whose failure could endanger the operation of the infrastructure or the safety of persons. AI used purely for optimisation, billing, or administrative functions within these sectors is not covered by §2.
What does 'safety component' mean in the critical infrastructure context?
A safety component is an AI system component whose failure or malfunction could endanger the safety of persons or the uninterrupted operation of infrastructure critical to public welfare. Examples include: AI that controls load balancing in an electricity grid (failure risks blackout); AI for water treatment process control (failure risks contamination); AI for road traffic signal management (failure risks collisions); SCADA AI for gas network pressure management (failure risks explosion). AI that only generates reports, provides analytics dashboards, or performs predictive maintenance scheduling, without having direct control over physical safety-critical processes, is less likely to qualify.
Is a notified body required for critical infrastructure AI?
No. Article 43(2) of the EU AI Act provides that for AI systems referred to in points 2 to 8 of Annex III — which includes §2 critical infrastructure AI — providers must follow the conformity assessment procedure based on internal control as referred to in Annex VI. This procedure does not involve a notified body. Self-assessment is the available route for critical infrastructure AI.
When must critical infrastructure AI comply with the EU AI Act?
Under the current EU AI Act (Art 113), the general application date is 2 August 2026 — meaning Annex III §2 critical infrastructure AI must comply from that date. COM(2025) 836 proposes a new Art 113 point (d) that delays the substantive high-risk AI obligations (Chapter III Sections 1–3) for Annex III systems: 6 months after a Commission decision confirming adequate compliance support, or — if that decision is not adopted in time — a fallback of 2 December 2027. This is a proposal and has not yet been enacted.
Does the EU AI Act apply to AI used in industrial control systems (SCADA/ICS)?
It can, if the SCADA or ICS AI has a safety function in one of the Annex III §2 categories (critical digital infrastructure, road traffic, water/gas/heating/electricity supply). An AI module embedded in a SCADA system that autonomously controls pressure valves in a gas network, or manages switching in a high-voltage power grid, is a strong candidate for high-risk classification under §2. Providers of such AI should assess whether the 'safety component' threshold is met and, if so, comply with full Chapter III obligations under the Art 43(2) self-assessment route.

Related compliance guides

  • Is My AI High-Risk? (Art 6 Checklist)
  • Conformity Assessment (Art 43 — Self-Assessment Route)
  • Risk Management System (Art 9 — Step-by-Step)
  • Human Oversight Requirements (Art 14)
  • COM(2025) 836 — Deadline Changes Overview
  • EU AI Act vs. NIS2 Directive — Overlap & Interaction

Check your infrastructure AI in 30 seconds

Regumatrix maps your system against Annex III §2 and Art 6(3), identifies whether self-assessment or an alternative route applies, and produces a cited compliance report — in about 30 seconds.

Start free — 3 analyses included