AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating, or electricity are high-risk under Annex III §2 of the EU AI Act. Full Chapter III obligations apply, but conformity assessment may follow the internal-control route, with no notified body required. COM(2025) 836 proposes extending the compliance deadline to 2 December 2027.
High-risk classification: full Chapter III obligations from 2 August 2026
Violations of high-risk AI system obligations under Chapter III carry the Art 99(4) penalty: up to €15 million or 3% of global annual turnover, whichever is higher. Energy companies, water utilities, traffic management authorities, and their AI suppliers are all within scope.
Is your infrastructure AI a safety component under Annex III §2?
Regumatrix analyses your system description against the exact §2 scope text and the Art 6(3) narrow exception criteria, identifies whether the safety component threshold is met, and maps your full obligation chain in a cited 8-section report — in about 30 seconds.
Check in 30 seconds — 3 free analyses
Annex III §2 — HIGH-RISK
AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.
In scope — typical examples
Outside §2 scope
Art 6(3) narrow exception — not automatically high-risk
Under Art 6(3), an Annex III system is NOT high-risk if it does not pose a significant risk of harm and falls into one of the narrow categories: (a) narrow procedural task only; (b) improving results of a previously completed human activity; (c) detecting decision-making patterns without replacing human assessment; or (d) preparatory task to a human assessment. Note: systems that perform profiling of natural persons are always high-risk. If you rely on Art 6(3), document the assessment under Art 6(4) and register under Art 49(2).
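The Art 6(3) test above can be read as a short decision procedure. A minimal sketch, assuming the boolean inputs are each a documented finding in the provider's Art 6(4) assessment (the function name and parameters are illustrative, not taken from the Act):

```python
def art_6_3_exception_applies(
    performs_profiling: bool,
    poses_significant_risk: bool,
    narrow_conditions_met: bool,  # any one of Art 6(3)(a)-(d) applies
) -> bool:
    """Return True if the Annex III system is NOT automatically high-risk."""
    if performs_profiling:
        # Profiling of natural persons is always high-risk.
        return False
    if poses_significant_risk:
        # The derogation requires no significant risk of harm.
        return False
    return narrow_conditions_met

# A preparatory traffic-data pre-processing tool (no profiling, no
# significant risk, condition (d) met) could rely on the derogation:
assert art_6_3_exception_applies(False, False, True) is True
# A profiling system can never rely on it:
assert art_6_3_exception_applies(True, False, True) is False
```

The point of the sketch is the ordering: profiling and significant risk are each independently disqualifying before the narrow categories are even reached.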
Providers of Annex III §2 AI systems — typically industrial software vendors, OT/ICS platform providers, and traffic management system developers — must satisfy all obligations below before market placement.
Risk Management System
Establish, implement, document, and maintain a risk management system covering the entire lifecycle. For infrastructure AI, this must address cascading failure modes: a failure in one AI-controlled component of critical infrastructure may propagate across interconnected systems. Identify foreseeable misuse scenarios (e.g., adversarial manipulation of traffic signals or SCADA systems) and document mitigations.
Data Governance
Training and test data must be representative of the operational environment — including seasonal, geographic, and demand variations in energy or water networks. Historical infrastructure data may reflect legacy system constraints that no longer apply. Data governance must ensure training data does not encode unsafe operational assumptions.
Technical Documentation (Annex IV)
Full Annex IV technical documentation: system description, design, intended purpose, architecture, data used, validation methodology, known limitations, and post-market monitoring plan. For infrastructure AI integrated into larger operational technology stacks, document the interfaces to surrounding OT components: inputs, outputs, protocols, and behaviour on failure.
Record-Keeping & Logging
Automatic event logging throughout the system's lifetime (Art 12). For safety-critical infrastructure AI, logs must capture inputs received, decisions made or recommended, and any human override events. Logs may be reviewed by national market surveillance authorities.
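The Act prescribes no log schema, so the shape of a record is a design choice. A minimal sketch of what one event record capturing the elements above (inputs, decision, human override) might look like; every field name here is an illustrative assumption:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class AIEventLogRecord:
    # Illustrative fields only; the AI Act does not prescribe a schema.
    timestamp: datetime              # when the event occurred (UTC)
    input_summary: str               # inputs the system received
    output: str                      # decision made or recommended
    human_override: bool = False     # did an operator intervene?
    operator_id: Optional[str] = None  # who overrode, if anyone

record = AIEventLogRecord(
    timestamp=datetime.now(timezone.utc),
    input_summary="grid frequency 49.92 Hz, load 4.1 GW",
    output="recommend shedding feeder F-12",
    human_override=True,
    operator_id="shift-op-07",
)
```

Making the record immutable (`frozen=True`) mirrors the audit-trail character of these logs: entries are appended, never edited.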
Transparency & Instructions for Use
Instructions for use must be provided to deployers (operators of infrastructure). These must describe operating conditions, performance limits, known failure modes, and the human oversight procedures required. For complex SCADA integrations, provide integration guidance for the deployer's operators.
Human Oversight
Particularly critical for infrastructure AI. The system must allow trained operators to understand outputs, detect anomalies, and override AI-driven control actions. Automatic failsafe mechanisms must be documented. Operators must be able to intervene manually at any time. This requirement has significant UX and interface design implications for control room systems.
Accuracy, Robustness & Cybersecurity
Infrastructure AI is a high-value target for adversarial attacks. Mandatory cybersecurity protections include input validation against adversarial manipulation, resistance to data poisoning in online-learning systems, and resilience under unexpected operational conditions. Accuracy requirements must be validated under edge conditions representative of real grid, traffic, or utility scenarios.
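The first protection named above, input validation, can start with rejecting physically implausible sensor values before they reach the model. A sketch; the bounds are illustrative assumptions for a 50 Hz grid, and real limits would come from the operator's engineering specification:

```python
def validate_grid_frequency(hz: float) -> float:
    """Reject implausible frequency readings before model inference."""
    # Illustrative plausibility window for a nominal 50 Hz grid;
    # actual limits are an engineering decision, not set by the Act.
    if not (45.0 <= hz <= 55.0):
        raise ValueError(f"implausible grid frequency: {hz} Hz")
    return hz

assert validate_grid_frequency(49.92) == 49.92
```

A value outside the window is more likely a sensor fault or a manipulated input than a real grid state, so it should trigger a fault path rather than an AI-driven control action.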
Art 43(2) provides the rule: for AI systems in Annex III points 2 to 8, providers must follow the conformity assessment procedure based on internal control as referred to in Annex VI. This procedure does not require a notified body. Unlike biometric AI (Annex III §1), critical infrastructure AI may be self-assessed.
Self-assessment route (Annex VI internal control):
NIS2 Directive cross-reference
Infrastructure operators within the scope of the NIS2 Directive (Directive (EU) 2022/2555) also have cybersecurity risk management obligations that overlap with Art 15 of the EU AI Act. The Art 15 cybersecurity requirements for the AI system exist in parallel to — and do not replace — NIS2 obligations on the operator. Coordinate compliance programmes where both apply.
Deploy within the intended purpose
Operate the AI strictly within the conditions and use case defined by the provider's instructions for use. Do not use a traffic management AI for a water network application, or operate outside validated operating conditions.
Assign human oversight roles
Designate trained operators who can monitor AI outputs, detect anomalies, and intervene. For critical infrastructure, this is operationalised as control room protocols and shift handover procedures that include AI monitoring tasks.
Conduct FRIA if public body deployer
Public authorities deploying high-risk AI must complete a Fundamental Rights Impact Assessment under Art 27(1). Grid operators or traffic authorities that are public or quasi-public bodies are in scope.
Monitor for anomalous outputs and report incidents
Continuously monitor AI operation. If the system produces outputs that could threaten infrastructure safety — e.g., recommendations that would destabilise grid frequency or contaminate a water supply — suspend use immediately and notify the provider and market surveillance authority under Art 26(5).
Retain automatically generated logs
Retain automatically generated logs for at least six months (Art 26(6)). These are the audit trail for regulatory inspections and incident investigations.
The Omnibus Simplification proposal COM(2025) 836 adds a new point (d) to Article 113 that delays the substantive high-risk AI obligations for all Annex III systems — including §2 critical infrastructure AI.
Proposed mechanism
Chapter III Sections 1, 2, and 3 shall apply 6 months after a Commission decision confirming that adequate compliance support measures (harmonised standards, common specifications, guidelines) are available for Annex III systems. In the absence of that decision within the required timeframe — or where the resulting date is later — the obligations apply from 2 December 2027 for systems classified as high-risk under Art 6(2) and Annex III.
Current law deadline
2 August 2026 (4 months away)
General AI Act application date
Proposed fallback deadline
2 December 2027 (20 months away)
COM(2025) 836 — pending agreement
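The proposed mechanism above reduces to date arithmetic: the Commission decision date plus six months, capped at the fallback, with the fallback applying outright if no decision arrives in time. A sketch under that reading of COM(2025) 836 (the function names are illustrative):

```python
import calendar
from datetime import date
from typing import Optional

FALLBACK = date(2027, 12, 2)  # proposed fallback application date

def add_months(d: date, months: int) -> date:
    """Calendar-month addition, clamping the day where needed."""
    m = d.month - 1 + months
    year, month = d.year + m // 12, m % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def application_date(commission_decision: Optional[date]) -> date:
    """Date Chapter III obligations would apply under the proposal."""
    if commission_decision is None:
        return FALLBACK  # no decision within the required timeframe
    resulting = add_months(commission_decision, 6)
    # Where the resulting date is later, the fallback governs.
    return min(resulting, FALLBACK)

# A decision on 1 March 2027 would mean obligations from 1 September 2027:
assert application_date(date(2027, 3, 1)) == date(2027, 9, 1)
# No decision in time means the fallback date applies:
assert application_date(None) == FALLBACK
```

Either way, under the proposal the obligations for Annex III §2 systems would apply no later than 2 December 2027.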
Common grey areas in critical infrastructure AI classification
Regumatrix maps your system against Annex III §2 and Art 6(3), identifies whether self-assessment or an alternative route applies, and produces a cited compliance report — in about 30 seconds.
Start free — 3 analyses included