Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Provider Obligation · High-Risk AI Providers

Post-Market Monitoring & Serious Incident Reporting (Articles 72–73)

Once a high-risk AI system is on the market, the provider's obligations don't end. Articles 72 and 73 require an active monitoring system throughout the system's lifetime and legally binding timelines for reporting serious incidents to national authorities.

What's at stake

Post-market monitoring is a component of the Quality Management System under Art 17(1)(h) and Art 17(1)(i), which is itself a mandatory obligation under Art 16(c). Failure to maintain a compliant PMM system or to report a serious incident on time is a violation of Article 16 provider obligations — subject to fines under Art 99(4)(a) of up to €15 million or 3% of global annual turnover, whichever is higher (lower cap for SMEs).

Not sure if your monitoring process meets Articles 72-73?

Regumatrix analyses your post-deployment monitoring approach against the Article 72 and 73 requirements and identifies gaps — including incident classification, reporting timelines, and PMM plan completeness.

Analyse my system

Article 72: Post-market monitoring system

Art 72(1) — Establish and document a PMM system

Providers must establish and document a post-market monitoring system proportionate to the nature of the AI technologies and the risks of the high-risk AI system. The proportionality requirement means higher-risk systems require more robust monitoring architecture.

Art 72(2) — Actively collect and analyse performance data

The PMM system must actively and systematically collect, document, and analyse relevant data on the performance of high-risk AI systems throughout their lifetime. Data may come from deployers or other sources. Where relevant, monitoring must include analysis of interaction with other AI systems.
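The "actively and systematically collect, document, and analyse" requirement implies a structured data pipeline rather than an ad-hoc log. A minimal sketch of what such a collection layer could look like, assuming illustrative field names (`system_id`, `metric`, `source`) that are not prescribed by the Regulation:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical record shape for Art 72(2)-style performance data;
# the field names are illustrative, not terms from the Regulation.
@dataclass
class PerformanceRecord:
    system_id: str
    source: str              # e.g. "deployer", "internal-telemetry"
    collected_at: datetime
    metric: str              # e.g. "false_positive_rate"
    value: float
    notes: str = ""

@dataclass
class MonitoringLog:
    records: list = field(default_factory=list)

    def ingest(self, record: PerformanceRecord) -> None:
        """Actively collect and document performance data (Art 72(2))."""
        self.records.append(record)

    def by_metric(self, metric: str) -> list:
        """Support systematic analysis by slicing the collected data."""
        return [r for r in self.records if r.metric == metric]
```

The key design point is that deployer-sourced data and internal telemetry land in the same documented store, so lifetime analysis (and later incident investigation) has a single evidence trail.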

Law enforcement carve-out: The obligation does not cover sensitive operational data of deployers that are law-enforcement authorities (Art 72(2) final sentence).

Art 72(3) — PMM plan (part of technical documentation)

The PMM system must be based on a written post-market monitoring plan. This plan is part of the technical documentation required under Annex IV.

Under current Art 72(3), the Commission was required to adopt an implementing act establishing a detailed template for the PMM plan by 2 February 2026. See the 836 proposal box below for a proposed change to this deadline.

Art 72(4) — Integration with existing sectoral systems

For high-risk AI systems covered by Union harmonisation legislation (Annex I, Section A), providers may integrate the Art 72 requirements into the post-market surveillance systems already required under that legislation, provided this achieves an equivalent level of protection. The same applies to high-risk AI systems referred to in Annex III, point 5 (access to essential services, including creditworthiness assessment and insurance risk pricing) placed on the market by financial institutions that are already subject to internal governance requirements under Union financial services law.

What is a "serious incident"? (Art 3(49))

A serious incident is defined as an incident or malfunction of an AI system that directly or indirectly leads to any of the following:

(a) Death or serious health harm

The death of a person, or serious harm to a person's health, caused directly or indirectly by the AI system.

(b) Critical infrastructure disruption

A serious and irreversible disruption of the management or operation of critical infrastructure.

(c) Fundamental rights violation

The infringement of obligations under Union law intended to protect fundamental rights.

(d) Property or environmental harm

Serious harm to property or the environment.
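The four categories above are mutually distinct triggers, and outcomes outside them (e.g. near-misses) do not create a formal reporting obligation. A sketch of that classification logic, assuming hypothetical outcome labels that are not terms defined by the Regulation:

```python
from enum import Enum
from typing import Optional

# Illustrative classifier for the four Art 3(49) categories.
class SeriousIncidentCategory(Enum):
    DEATH_OR_HEALTH_HARM = "Art 3(49)(a)"
    CRITICAL_INFRA_DISRUPTION = "Art 3(49)(b)"
    FUNDAMENTAL_RIGHTS_INFRINGEMENT = "Art 3(49)(c)"
    PROPERTY_OR_ENVIRONMENT_HARM = "Art 3(49)(d)"

# Hypothetical outcome labels mapped to their category.
_OUTCOME_MAP = {
    "death": SeriousIncidentCategory.DEATH_OR_HEALTH_HARM,
    "serious_health_harm": SeriousIncidentCategory.DEATH_OR_HEALTH_HARM,
    "critical_infrastructure_disruption": SeriousIncidentCategory.CRITICAL_INFRA_DISRUPTION,
    "fundamental_rights_infringement": SeriousIncidentCategory.FUNDAMENTAL_RIGHTS_INFRINGEMENT,
    "property_harm": SeriousIncidentCategory.PROPERTY_OR_ENVIRONMENT_HARM,
    "environmental_harm": SeriousIncidentCategory.PROPERTY_OR_ENVIRONMENT_HARM,
}

def classify(outcome: str) -> Optional[SeriousIncidentCategory]:
    """Return the Art 3(49) category, or None for outcomes
    (e.g. near-misses) that do not trigger formal reporting."""
    return _OUTCOME_MAP.get(outcome)
```

Classifying correctly matters downstream: the category determines which Article 73 deadline applies, and whether sectoral carve-outs (Art 73(9)-(10)) narrow the obligation to category (c) only.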

Article 73: Reporting timelines

All serious incidents must be reported to the market surveillance authority of the Member State where the incident occurred (Art 73(1)). The deadlines vary by severity:

Standard reporting

15 days — Art 73(2)

From when the provider (or deployer) becomes aware of the serious incident AND has established a causal link, or reasonable likelihood of one, with the AI system. Both conditions must be met — awareness alone does not start the clock if the connection to the AI system is not yet established.

Widespread infringement or critical infrastructure (Art 3(49)(b))

2 days — Art 73(3)

Immediately upon awareness; not later than two days after the provider or deployer becomes aware of the incident.

Death of a person

10 days — Art 73(4)

Report immediately after establishing (or suspecting) a causal relationship; not later than 10 days from awareness. The obligation applies as soon as a causal connection is suspected — not only confirmed.
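The three deadlines above reduce to a small severity-to-days table. A sketch of the outer-deadline arithmetic, assuming simple calendar-day counting and hypothetical severity keys; note that when the clock starts (awareness alone vs. awareness plus causal link) differs per paragraph and should be checked against the text:

```python
from datetime import date, timedelta

# Outer deadlines per Article 73; keys are illustrative labels.
REPORTING_DAYS = {
    "standard": 15,                # Art 73(2): awareness + causal link
    "widespread_or_critical": 2,   # Art 73(3): from awareness
    "death": 10,                   # Art 73(4): from awareness
}

def report_due(clock_start: date, severity: str) -> date:
    """Latest date by which the serious-incident report is due,
    given the date the relevant clock started."""
    return clock_start + timedelta(days=REPORTING_DAYS[severity])
```

In practice this means triage has to happen first: routing a critical-infrastructure incident through a standard 15-day workflow misses the 2-day window by nearly two weeks.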

Incomplete reports allowed (Art 73(5))

Where meeting the deadline requires it, an initial incomplete report may be submitted, followed by a complete report without undue delay.

Post-incident investigation obligation (Art 73(6))

Following any serious incident report, the provider must without delay:

  • ✓Perform necessary investigations in relation to the incident and the AI system concerned.
  • ✓Conduct a risk assessment of the incident.
  • ✓Take corrective action to address identified causes.
  • ✓Cooperate with competent authorities and, where relevant, the notified body that issued the system's certificate.
  • ✓Not alter the AI system in a way that could affect subsequent evaluation of causes, prior to informing competent authorities of the intended action.

Sectoral carve-outs for incident reporting

Art 73(9) — Systems under equivalent Union law

For high-risk AI systems placed on the market by providers already subject to Union legislative instruments with equivalent reporting obligations, notification is limited to incidents defined in Art 3(49)(c) only — fundamental rights violations. Other serious incident categories are handled under the sectoral law.

Art 73(10) — Medical devices (MDR / IVDR)

For high-risk AI systems that are safety components of medical devices covered by Regulation (EU) 2017/745 or 2017/746, serious incident notification is limited to Art 3(49)(c) cases and must be made to the national competent authority designated for that purpose by the Member State where the incident occurred.

The deployer's role

Deployers are not passive recipients. Under Art 26(5), deployers must monitor the operation of the high-risk AI system and report observations, anomalies, or near-incidents to the provider. The provider's PMM system relies on deployer-sourced data (Art 72(2)). Deployers are also explicitly included in Art 73 where they become aware of a serious incident — the reporting obligation falls on "the provider or, where applicable, the deployer."

Practical implication

If a deployer becomes aware of a serious incident before the provider does, the deployer should notify the provider immediately and — if the provider is unable or unresponsive — may need to notify the relevant market surveillance authority directly to comply within the applicable deadline.

PROPOSAL — not yet enacted lawCOM(2025) 836 — Digital Omnibus

836 change: Mandatory PMM template removed — guidance only

What changes: Current Art 72(3) requires the Commission to adopt an implementing act establishing a mandatory harmonised template for the post-market monitoring plan by 2 February 2026. COM(2025) 836 would replace this with a simpler obligation: the Commission shall publish guidance on the PMM plan — removing the implementing-act requirement entirely.

Why: The Digital Omnibus recitals explain that mandatory implementing acts for harmonised conditions should only be adopted where strictly necessary. Removing the PMM template requirement gives providers more flexibility in how they structure their monitoring plans, while still requiring that a plan exists and forms part of the technical documentation. The Commission is still required to provide guidance to support implementation.

The core Art 72(1) and (2) obligations — establish a system, actively collect and analyse performance data — are not changed by 836. The amendment is limited to the template/implementing act for the PMM plan itself.

No changes are proposed under COM(2025) 837 for post-market monitoring or serious incident reporting.

Common post-market monitoring failures

  • ⚠Treating post-market monitoring as a passive log review rather than an active, systematic data collection and analysis process.
  • ⚠Failing to document the PMM plan as part of the technical documentation before market placement — an Art 72(3) requirement, not an optional later step.
  • ⚠Not establishing a clear feedback channel with deployers to collect performance data (Art 72(2) requires data from deployers).
  • ⚠Missing the 2-day reporting window for critical infrastructure incidents by routing all incidents through a standard 15-day process.
  • ⚠Altering or patching the AI system after a serious incident before informing competent authorities — explicitly prohibited by Art 73(6).
  • ⚠Failing to recognise that a deployer who becomes aware of a death linked to the system before the provider must act on the 10-day timeline themselves.

Regumatrix assesses your post-market monitoring system for these failure points and produces a cited compliance gap report. Try it on your system.

Frequently asked questions

Does the deployer have any role in post-market monitoring?
Yes. Under Article 26(5), deployers of high-risk AI systems must monitor the operation of their systems and report relevant information to the provider — including any observations that could affect ongoing compliance or indicate anomalous behaviour. Article 73 also explicitly covers deployers in the incident reporting obligation, stating reports may be made by 'the provider or, where applicable, the deployer' where the deployer becomes aware of a serious incident.
What counts as a 'serious incident' under the AI Act?
Article 3(49) defines a serious incident as an incident or malfunction of an AI system that directly or indirectly leads to: (a) the death of a person, or serious harm to a person's health; (b) a serious and irreversible disruption of the management or operation of critical infrastructure; (c) the infringement of obligations under Union law intended to protect fundamental rights; or (d) serious harm to property or the environment. All four categories are covered. Near-misses that do not actually lead to these outcomes do not trigger the formal reporting obligation.
What is the standard reporting deadline for a serious incident?
Article 73(2) sets the standard deadline at 15 days from when the provider (or deployer, where applicable) becomes aware of the serious incident AND has established a causal link between the AI system and the incident (or the reasonable likelihood of such a link). The clock starts when you know about the incident and have connected it to the system — not from deployer awareness alone. For incidents involving potential death of a person, the deadline is 10 days (Art 73(4)). For widespread infringements or critical infrastructure disruptions (Art 3(49)(b)), the deadline is 2 days (Art 73(3)).
Does the post-market monitoring plan have to use the Commission template?
Under current Article 72(3), the Commission was required to adopt an implementing act establishing a mandatory template for the post-market monitoring plan by 2 February 2026. Under the proposed COM(2025) 836 Digital Omnibus, if enacted, this obligation would be changed: the Commission would instead be required to 'publish guidance' — removing the mandatory harmonised template in favour of a more flexible, guidance-based approach. Until 836 is enacted, the implementing act deadline and potential mandatory template remain in force.
Can we submit an incomplete incident report to meet the deadline?
Yes. Article 73(5) explicitly allows providers (or deployers) to submit an initial incomplete report to meet the deadline, followed by a complete report. This prevents the strict reporting timelines from incentivising delay while information is gathered. However, the subsequent complete report must follow without undue delay.

Related compliance guides

AI Provider Obligations

Art 16 — complete provider checklist. Art 16(c) requires the QMS, which in turn requires PMM system and incident reporting procedures.

Quality Management System

Art 17(1)(h) and (i) — the QMS must include post-market monitoring and serious incident reporting as documented procedures.

Technical Documentation

Annex IV — the PMM plan is part of the technical documentation required before market placement (Art 72(3)).

Risk Management System

Art 9 — risk management is iterative; PMM data feeds back into risk reassessment and system updates throughout the lifecycle.

Data Governance

Art 10 — data quality issues discovered through PMM may require dataset updates and re-testing under Art 10 standards.

EU Database Registration

Art 49 — providers must keep their EU database registrations up-to-date, including where PMM identifies changes requiring re-registration.

Is your post-market monitoring system Article 72-compliant?

Regumatrix assesses your monitoring system architecture, PMM plan content, and incident classification process against the full Article 72 and 73 requirements.

Get your Articles 72-73 analysis