For operators of critical infrastructure — energy grids, water systems, transport networks, digital infrastructure — AI systems used in safety roles are simultaneously subject to the EU AI Act and the NIS2 Directive. AI Act Article 15 cybersecurity requirements and NIS2 Art 21 security measures address the same systems from different angles. COM(2025) 837 provides partial relief: a single-entry ENISA notification platform for NIS2, GDPR, DORA, eIDAS and CER incidents — though AI Act Article 73 serious incident reporting remains a separate obligation.
NIS2 Directive
Directive (EU) 2022/2555 — transposition deadline: 17 October 2024
EU AI Act
Regulation (EU) 2024/1689 — most high-risk obligations apply from 2 August 2026
| Aspect | NIS2 | EU AI Act |
|---|---|---|
| What it regulates | Security of network and information systems — for entities in sectors critical to society and economy | AI systems — based on risk to health, safety, and fundamental rights of persons |
| Who it applies to | Essential entities (energy, transport, banking, financial market infrastructure, health, digital infrastructure, public administration, space) + important entities (digital providers, manufacturing, postal, food) | Providers (develop/place AI on market), deployers (use AI in professional context), importers, distributors — regardless of sector |
| Cybersecurity obligation | Art 21: proportionate technical/organisational measures — risk analysis, incident handling, supply chain security, encryption, access control, security testing | Art 15: high-risk AI must achieve accuracy, robustness and cybersecurity; resilient against data poisoning, model poisoning, adversarial examples, confidentiality attacks throughout lifecycle |
| Incident reporting | Art 23: significant incidents → 24h early warning → 72h assessment → 30-day final report to national NIS2 competent authority | Art 73: serious incidents (death, serious harm, significant property/rights breach) → provider notifies national market surveillance authority immediately once the causal link is established, and at the latest 15 days after awareness (2 days for critical-infrastructure disruption, 10 days where a death occurred) |
| Supply chain / third parties | Art 21(2)(d): essential entities must address security in supply chain including ICT service providers and product manufacturers | Arts 24–26: clear obligation allocation between providers and downstream parties; deployers entitled to technical documentation, instructions for use, conformity declarations |
| Enforcement authority | National NIS2 competent authorities (sector-specific); ENISA for coordination; cross-border under NIS Cooperation Group | National market surveillance authorities; EU AI Office for GPAI models and (under 836) VLOP-integrated AI |
| Penalty maximum | Essential entities: €10M or 2% of global annual turnover (whichever higher); important entities: €7M or 1.4% | €35M/7% for prohibited AI; €15M/3% for high-risk obligation violations; €7.5M/1.5% for incorrect information |
| Applicability date | National transposition deadline: 17 October 2024 (Directive — Member States implement) | 2 February 2025 (prohibitions); 2 August 2026 (most high-risk obligations); 2 August 2027 (high-risk AI embedded in Annex I regulated products) |
Critical infrastructure AI — Annex III §2 ↔ NIS2 essential entities
AI Act Annex III §2 designates as high-risk any AI system used as a safety component in the management and operation of critical digital infrastructure, road traffic management, water supply, gas, heating, or electricity. The organisations operating these systems — water utilities, energy providers, transport operators, internet exchange points — are exactly the NIS2 essential entities required to meet NIS2 Art 21 cybersecurity measures and Art 23 incident reporting. The same AI system deployments trigger high-risk AI Act obligations AND must be covered by the entity's NIS2 ICT risk management framework. Both apply independently.
Cybersecurity requirements — AI Act Art 15 ↔ NIS2 Art 21
AI Act Art 15 is an AI system-level cybersecurity requirement applied to providers of high-risk AI: accuracy, robustness, and resilience against adversarial attacks (data poisoning, model poisoning, adversarial examples). NIS2 Art 21 is an entity-level ICT risk management obligation applied to essential and important entities: supply chain security, ICT procurement security, encryption. These are complementary, not duplicative. The AI Act's Art 15 obligations ensure the AI component is secure; NIS2 Art 21 ensures the wider network and information environment in which the AI operates is secure. An AI system with EU Cybersecurity Act certification is presumed to comply with AI Act Art 15 to the extent covered — which can help NIS2 entities demonstrate their ICT procurement applied appropriate security standards.
Incident reporting — Art 73 ↔ NIS2 Art 23
These operate in parallel, applying to different parties for the same incident. AI Act Art 73: the AI system provider must report a serious incident to the national market surveillance authority. NIS2 Art 23: the essential/important entity (often the deployer) must report a significant incident to the NIS2 competent authority within 24h, 72h, and 30 days. If a critical infrastructure operator's AI system malfunctions and causes harm, the AI provider reports under Art 73 and the infrastructure operator reports under NIS2 Art 23. These are not the same report — they go to different authorities and describe different aspects of the same event.
Supply chain security — AI Act Arts 24–26 ↔ NIS2 Art 21(2)(d)
NIS2 essential entities must assess the security practices of their ICT suppliers — including AI system providers. AI Act provider/deployer obligations create a documentation chain: providers must supply technical documentation (Art 11), the EU declaration of conformity (Art 47), and instructions for use (Art 13), which deployers rely on to meet their own Art 26 obligations. NIS2 entities can leverage this documentation when conducting ICT supply chain due diligence. In practice, an AI system provider to a NIS2 essential entity is simultaneously an AI Act provider (obliged to supply that documentation) and an ICT third-party service provider (subject to NIS2 supply chain risk assessment by the entity).
| Obligation | Who reports | To whom | Timeline |
|---|---|---|---|
| NIS2 Art 23 | The essential/important entity (typically the deployer) | National NIS2 competent authority | 24h / 72h / 30 days |
| AI Act Art 73 | AI system provider | National MSA (market surveillance authority) | Immediately once the causal link is established; at the latest 15 days after awareness (2 days for critical-infrastructure disruption, 10 days for a death) |
| 837 single-entry (when operational) | Essential/important entity | ENISA platform → distributed to NIS2 + GDPR + DORA + eIDAS + CER authorities | Via ENISA platform; same underlying timelines apply |
| AI Act Art 73 after 837 | AI system provider (unchanged) | National MSA — NOT covered by ENISA platform | Unchanged — separate obligation |
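These timelines can be operationalised in incident-response tooling. Below is a minimal, illustrative Python sketch that computes the outer notification deadlines from the moment of awareness; the function names and example timestamp are assumptions, and the "significant"/"serious" thresholds still require legal assessment upstream.

```python
# Illustrative only: outer notification deadlines under NIS2 Art 23 (entity/
# deployer track) and AI Act Art 73 (provider track), counted from awareness.
from datetime import datetime, timedelta

def nis2_art23_deadlines(aware_at: datetime) -> dict[str, datetime]:
    """NIS2 Art 23 cascade for the essential/important entity."""
    return {
        "early_warning": aware_at + timedelta(hours=24),
        "incident_notification": aware_at + timedelta(hours=72),
        "final_report": aware_at + timedelta(days=30),
    }

def ai_act_art73_outer_limit(aware_at: datetime, *, death: bool = False,
                             critical_infra_disruption: bool = False) -> datetime:
    """AI Act Art 73: report immediately once the causal link is established,
    and in any event no later than these outer limits after awareness."""
    if critical_infra_disruption:
        return aware_at + timedelta(days=2)
    if death:
        return aware_at + timedelta(days=10)
    return aware_at + timedelta(days=15)

aware = datetime(2026, 9, 1, 8, 30)
print(nis2_art23_deadlines(aware))
print(ai_act_art73_outer_limit(aware, critical_infra_disruption=True))
```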
Are you a NIS2 essential entity deploying AI in critical infrastructure?
Regumatrix maps your AI systems against Annex III §2, identifies dual obligations under AI Act and NIS2, and generates a combined compliance checklist cross-referencing Art 15 cybersecurity and NIS2 Art 21 measures.
Check my AI Act + NIS2 obligations — 3 free analyses
Today, an entity subject to multiple notification obligations must file separate reports to separate authorities for the same incident. COM(2025) 837 proposes to end this by creating one platform covering all of the acts in scope.
New NIS2 Art 23a — ENISA single-entry point
NIS2 Art 23(1) and 30(1) amendments
What the single-entry point does NOT cover
The single-entry point covers NIS2, GDPR, DORA, eIDAS, and CER. The EU AI Act Art 73 serious incident reporting is not included — it remains a separate obligation to the national market surveillance authority. For a NIS2 essential entity that is also the AI system provider, an AI malfunction may require: (1) a NIS2 Art 23 report via the ENISA single-entry point, and (2) a separate AI Act Art 73 report to the national MSA. Operators should monitor whether the single-entry point is later extended to cover AI Act Art 73.
Confirm NIS2 essential / important entity status
Verify your sector maps to NIS2 Annex I (essential) or Annex II (important). Energy, water, transport, and digital infrastructure operators are essential entities. This triggers enhanced NIS2 obligations including Art 21 security measures, Art 23 incident reporting, and board accountability.
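As a rough triage aid, this sector check can be scripted before the legal analysis. The sketch below uses abbreviated Annex I/II sector lists and deliberately omits the size-cap and national designation rules, so treat its output as a starting point only.

```python
# Rough triage only: maps a sector to a NIS2 entity class. The lists are
# abbreviated and the size-cap / member-state designation rules are omitted.
ANNEX_I_ESSENTIAL = {"energy", "transport", "banking", "financial market infrastructure",
                     "health", "drinking water", "waste water", "digital infrastructure",
                     "public administration", "space"}
ANNEX_II_IMPORTANT = {"postal and courier", "waste management", "chemicals",
                      "food", "manufacturing", "digital providers", "research"}

def nis2_entity_class(sector: str) -> str:
    s = sector.lower()
    if s in ANNEX_I_ESSENTIAL:
        return "essential (Annex I) - verify size thresholds and national designation"
    if s in ANNEX_II_IMPORTANT:
        return "important (Annex II) - verify size thresholds"
    return "out of scope on sector alone - check national transposition"

print(nis2_entity_class("energy"))
```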
Identify Annex III §2 AI systems
Audit AI deployments used as safety components in management and operation of critical digital infrastructure, road traffic, water, gas, heating, or electricity networks. These are automatically high-risk under the AI Act and require conformity assessment, CE marking, technical documentation, and human oversight.
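A simple inventory filter can surface candidates for this audit. The AISystem record and sector labels below are hypothetical simplifications; flagged systems still need case-by-case legal classification.

```python
# Minimal sketch of an inventory filter flagging Annex III point 2 candidates.
from dataclasses import dataclass

ANNEX_III_2_SECTORS = {
    "critical digital infrastructure", "road traffic",
    "water supply", "gas supply", "heating supply", "electricity supply",
}

@dataclass
class AISystem:
    name: str
    sector: str             # operational context of the deployment
    safety_component: bool  # used as a safety component in management/operation?

def annex_iii_2_candidates(inventory: list[AISystem]) -> list[AISystem]:
    return [s for s in inventory
            if s.safety_component and s.sector in ANNEX_III_2_SECTORS]

inventory = [
    AISystem("grid-load-balancer", "electricity supply", safety_component=True),
    AISystem("hr-screening-tool", "corporate services", safety_component=False),
]
print([s.name for s in annex_iii_2_candidates(inventory)])  # ['grid-load-balancer']
```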
Map Art 15 against Art 21 cybersecurity measures
Compare AI Act Art 15 requirements (data poisoning resilience, model robustness, adversarial attack resistance) against your NIS2 Art 21 ICT risk management framework. Identify where a single security measure addresses both. Consider EU Cybersecurity Act certification — certified AI systems are presumed compliant with Art 15 to the extent covered.
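One way to structure that comparison is a cross-reference table that gap-analysis tooling can consume. The pairings below are an editorial simplification, not an official correspondence between Art 15 and Art 21(2).

```python
# Indicative pairings between AI Act Art 15 themes and NIS2 Art 21(2) measure
# categories, usable as the seed of a gap analysis. Editorial simplification.
ART15_TO_ART21 = {
    "resilience to data / model poisoning": [
        "supply chain security (Art 21(2)(d))",
        "security in acquisition, development, maintenance (Art 21(2)(e))",
    ],
    "resistance to adversarial examples": [
        "incident handling (Art 21(2)(b))",
        "effectiveness assessment of measures (Art 21(2)(f))",
    ],
    "confidentiality of models and training data": [
        "cryptography and encryption policies (Art 21(2)(h))",
        "access control and asset management (Art 21(2)(i))",
    ],
    "robustness / fallback under failure": [
        "business continuity and crisis management (Art 21(2)(c))",
    ],
}

def dual_coverage(measures_in_place: set[str]) -> dict[str, list[str]]:
    """For each Art 15 theme, list the mapped NIS2 measures already implemented."""
    return {theme: [m for m in nis2 if m in measures_in_place]
            for theme, nis2 in ART15_TO_ART21.items()}

print(dual_coverage({"supply chain security (Art 21(2)(d))"}))
```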
Establish dual incident reporting procedures
Prepare separate incident report procedures: NIS2 Art 23 (entity → NIS2 authority: 24h/72h/30d) and AI Act Art 73 (provider → national MSA). If you are both the deployer and provider, both obligations apply. When 837's single-entry point becomes operational, consolidate NIS2/GDPR/DORA notifications there — AI Act Art 73 stays separate.
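In an incident-response runbook, this dual track reduces to a small routing rule keyed on the organisation's role(s) for the affected system. The sketch below assumes the "significant" and "serious" thresholds have already been assessed.

```python
# Simplified routing rule: which notifications are owed for one incident,
# given the organisation's role(s). Threshold assessments happen upstream.
def reports_due(is_nis2_entity: bool, is_ai_act_provider: bool,
                significant_under_nis2: bool, serious_under_ai_act: bool) -> list[str]:
    due = []
    if is_nis2_entity and significant_under_nis2:
        due.append("NIS2 Art 23 -> NIS2 competent authority (24h / 72h / 30d)")
    if is_ai_act_provider and serious_under_ai_act:
        due.append("AI Act Art 73 -> market surveillance authority (separate report)")
    return due

# A critical-infrastructure operator that also developed the AI system files both:
print(reports_due(True, True, True, True))
```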
Apply NIS2 supply chain due diligence to AI providers
For each AI system provider whose product is used in NIS2-critical operations, conduct a NIS2 Art 21(2)(d) supply chain risk assessment. Request AI Act technical documentation (Art 11), the conformity declaration (Art 47), and instructions for use (Art 13) as part of procurement due diligence.
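Procurement teams can track these requests with a simple checklist structure. The document keys below are illustrative labels, not defined terms from either act.

```python
# Illustrative procurement checklist: AI Act documents to request from an AI
# supplier during NIS2 Art 21(2)(d) due diligence. Keys are editorial labels.
REQUESTED_DOCS = {
    "technical_documentation": "AI Act Art 11 / Annex IV",
    "eu_declaration_of_conformity": "AI Act Art 47",
    "instructions_for_use": "AI Act Art 13",
}

def outstanding_docs(received: set[str]) -> dict[str, str]:
    """Return the items still missing, with the provision to cite in the request."""
    return {doc: basis for doc, basis in REQUESTED_DOCS.items() if doc not in received}

print(outstanding_docs({"technical_documentation"}))
```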
Can an organisation be subject to both NIS2 and the EU AI Act for the same AI system?
Yes, and this is common. NIS2 applies to essential entities (energy, transport, banking, health, digital infrastructure) and important entities (digital providers, manufacturing, food, postal). If any of those organisations use AI systems that qualify as high-risk under Annex III of the AI Act — for example, AI systems used to manage critical digital infrastructure or road traffic (Annex III §2) — they face dual obligations: AI Act Chapter III requirements (risk management, conformity assessment, technical documentation, human oversight) and NIS2 requirements (ICT risk management framework, incident reporting, supply chain security). Both apply independently and simultaneously.
Do the AI Act's cybersecurity requirements overlap with NIS2 security measures?
Yes, significantly. AI Act Article 15 requires that high-risk AI systems achieve appropriate levels of accuracy, robustness, and cybersecurity — including resilience against data poisoning, model poisoning, adversarial examples, and confidentiality attacks. NIS2 Article 21 requires essential and important entities to take risk-proportionate technical and organisational measures addressing supply chain security, ICT security in procurement, and security of network and information systems. For a NIS2 essential entity that deploys high-risk AI, both sets of cybersecurity obligations apply to the same system. The EU Cybersecurity Act provides a helpful bridge: AI systems certified under a Cybersecurity Act scheme are presumed compliant with AI Act Art 15 to the extent the certification covers them.
Who reports an incident caused by an AI system: the provider or the operator?
Both can apply, but to different parties. AI Act Art 73 requires the AI system provider to report serious incidents (causing or likely to cause death, serious harm, significant property damage, or fundamental rights breach) to the national market surveillance authority. NIS2 Art 23 requires the essential or important entity deploying the system to report significant incidents to the NIS2 competent authority: 24-hour early warning, 72-hour assessment, and 30-day final report. If the deploying entity is also the AI system provider, it must file both types of reports to two different authorities.
How does COM(2025) 837 change incident notification for critical infrastructure operators?
COM(2025) 837 proposes to insert Article 23a into NIS2, establishing an ENISA-maintained single-entry point for incident notifications. When this platform is operational, entities subject to multiple reporting obligations — NIS2, GDPR, DORA, eIDAS, CER — can submit a single notification covering all applicable obligations for the same incident. ENISA pilots the system within 18 months of the 837 Regulation entering into force, with a Commission OJ notice when it becomes operational. Notably, AI Act Art 73 incident reporting is not included in the single-entry point — AI Act serious incident notifications remain a separate obligation to the national market surveillance authority.
Does NIS2 supply chain security cover AI system providers?
In practice, yes. NIS2 Article 21(2)(d) requires essential and important entities to address supply chain security — including the security practices of their ICT service and product suppliers. An AI system provider whose product controls industrial processes, manages digital infrastructure, or supports healthcare services for a NIS2-essential entity qualifies as an ICT service provider whose practices fall within the NIS2 supply chain security requirement. AI Act Arts 24–26 (provider/deployer obligations) create a parallel information-sharing chain. Deployers can leverage AI Act technical documentation and instructions for use when discharging NIS2 supply chain due diligence on AI system providers.
High-Risk AI Systems — Chapter III
Annex III §2 — critical infrastructure AI triggers full high-risk obligations.
EU AI Act vs GDPR
Critical infrastructure AI may process personal data — GDPR and AI Act overlap.
EU AI Act vs DORA
Banks and financial infrastructure face dual DORA + AI Act obligations — also covered by 837 single-entry point.
AI Act Market Surveillance & Enforcement
AI Act Art 73 serious incident reporting to national MSA — not covered by ENISA platform.
Digital Omnibus 837 — Full Overview
Complete overview of COM(2025) 837 including the ENISA single-entry notification platform.
Data Breach Notification Changes (837)
NIS2 + GDPR notifications via a single platform — benefits for critical infrastructure operators.
Regumatrix identifies every AI system in your critical infrastructure operations, maps them against Annex III §2, and generates a combined AI Act + NIS2 compliance checklist covering cybersecurity, incident reporting, and supply chain security.
Start free analysis