Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
High-risk AI obligation · Article 9 · Up to €15M / 3% · 4 months away

EU AI Act Risk Management System

Article 9 does not ask you to file a risk report and move on. It requires a risk management system: a continuous, documented, iterative process that runs for the entire lifecycle of your high-risk AI system. This guide explains the mandatory four-step process, what risks are actually in scope, how the mitigation hierarchy works, and what your testing must prove before you can place the system on the market.

Penalty: up to €15,000,000 or 3% of global annual turnover

Failing to establish, implement or maintain a compliant risk management system is a violation of Art 9, caught by Art 99(4). The fine ceiling is whichever is higher — €15 million or 3% of total worldwide annual turnover. For SMEs and startups, Art 99(6) instead applies the lower figure.
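The "whichever is higher" rule, and the SME variant that takes the lower figure, reduce to simple arithmetic. The sketch below is illustrative only; the function name and the SME flag are invented for the example, and actual fines are set by regulators within these ceilings.

```python
# Illustrative sketch of the Art 99(4)/(6) fine-ceiling arithmetic described above.
# Not legal advice; this only shows how the two caps combine.

def fine_ceiling(worldwide_annual_turnover_eur: float, is_sme: bool = False) -> float:
    fixed_cap = 15_000_000                                # EUR 15 million
    turnover_cap = 0.03 * worldwide_annual_turnover_eur   # 3% of turnover
    if is_sme:
        # Art 99(6): for SMEs and startups the LOWER of the two figures applies
        return min(fixed_cap, turnover_cap)
    # Default Art 99(4) rule: the HIGHER of the two figures
    return max(fixed_cap, turnover_cap)
```

For a provider with EUR 1 billion turnover the ceiling is EUR 30 million, since 3% exceeds the fixed cap; for an SME with EUR 10 million turnover the SME rule brings the ceiling down to EUR 300,000.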

A risk management system that exists on paper but has not been updated since pre-market deployment is not a compliant system. The law requires regular systematic review throughout the lifecycle — post-market monitoring data must feed back into the risk evaluation process.

Does your high-risk AI system need an Art 9 risk management system?

Regumatrix checks your system against Annex III and returns your risk tier, exact obligations, and a cited compliance report — including what your risk management system must cover.

Check your system

The core obligation — Article 9(1) and 9(2)

Art 9(1) states that a risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems. All four verbs matter: you must create it, run it, write it down, and keep it current.

Art 9(2) then defines what that system is: a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. The lifecycle includes not just development, but the entire period during which the system is in service.

The four mandatory steps — Article 9(2)(a)–(d)

The risk management system must comprise the following four steps. These are not optional choices — the word used is “comprise”, meaning all four are required elements.

Step 1 — Identify and analyse risks (Art 9(2)(a))

Identify and analyse the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when used in accordance with its intended purpose. Both categories matter — you cannot limit the analysis to documented, already-manifested risks.

Step 2 — Estimate and evaluate risks, including misuse (Art 9(2)(b))

Estimate and evaluate the risks that may emerge both under intended purpose and under conditions of reasonably foreseeable misuse. This means you must actively consider how the system could be misused — not assume that all deployers will follow your instructions perfectly. A reasonably foreseeable misuse is one that a careful analyst could have anticipated, not just one that has already been observed.

Step 3 — Evaluate risks from post-market data (Art 9(2)(c))

Evaluate other risks possibly arising, based on analysis of data gathered from the post-market monitoring system referred to in Art 72. This is the loop that makes the system genuinely iterative: real-world operational data must feed back into the risk assessment. Risks that only become visible after deployment must be captured and addressed.

Step 4 — Adopt appropriate and targeted risk management measures (Art 9(2)(d))

Adopt measures designed to address the risks identified under step 1. The measures must be appropriate and targeted — broad, generic risk statements without specific corresponding measures do not satisfy this requirement. The measures must also account for the combined effect of all Section 2 requirements applied together, with a view to minimising risks while achieving an appropriate balance (Art 9(4)).

Which risks are in scope? — Article 9(3)

Art 9(3) contains an important scoping rule: the risk management system covers only those risks which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or through the provision of adequate technical information.

In practice, this means two categories of risk are in scope:

  • Risks that can be reduced or eliminated through technical design choices — architecture, model selection, guardrails, output filtering, fallback behaviour
  • Risks that cannot be designed out but can be substantially reduced through adequate technical information — instructions for use, known limitations, deployment context requirements, deployer training specifications

Risks that are entirely outside your technical influence — such as wholly unpredictable downstream second-order social effects — fall outside the Art 9(3) scope. However, this scoping rule should not be read opportunistically: if a risk could have been mitigated by a different design choice, it is in scope.

The three-tier mitigation hierarchy — Article 9(5)

Art 9(5) requires that the residual risk associated with each hazard — and the overall residual risk of the system — is judged to be acceptable. To reach that point, the law specifies a three-tier mitigation sequence. The tiers are applied in order: you must genuinely attempt each tier before relying on the next.

Tier 1 (preferred) — Eliminate or reduce through design (Art 9(5)(a))

Eliminate or reduce identified risks as far as technically feasible through adequate design and development of the high-risk AI system. This is the preferred approach — redesign the system so the risk does not exist, or exists at a substantially reduced level. The qualifier “technically feasible” means that if redesign is possible, it must be attempted; cost or convenience alone does not make it “infeasible.”

Tier 2 (where risks remain) — Mitigate and control what cannot be designed out (Art 9(5)(b))

Where risks cannot be eliminated through Tier 1, implement adequate mitigation and control measures — guardrails, usage restrictions, access controls, monitoring mechanisms. The mitigation must be genuine: a document acknowledging the risk without a corresponding technical or procedural control does not satisfy this tier.

Tier 3 (residual risk) — Inform and train deployers for remaining residual risk (Art 9(5)(c))

Provide the information required under Art 13 — instructions for use, limitations, known risk scenarios — and, where appropriate, training to deployers. In doing so, the law requires due consideration of the deployer's technical knowledge, experience, education, and the training they can be expected to have, as well as the presumable context of use.

Tier 3 addresses residual risk only — risks that genuinely cannot be further reduced by Tiers 1 or 2. It cannot be used as a substitute for design-based mitigation.
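One way to make the tier ordering auditable is to record, per risk, which tiers were attempted and which tier is ultimately relied on. The record layout and check below are a hypothetical illustration of that ordering rule, not a mandated format.

```python
# Illustrative check of the Art 9(5) tier ordering: relying on a later tier is
# only defensible once every earlier tier has been genuinely attempted.
# The record keys and tier labels are invented for this sketch.
TIERS = ["design_elimination",    # 9(5)(a): eliminate/reduce by design
         "mitigation_controls",   # 9(5)(b): guardrails, restrictions, monitoring
         "deployer_information"]  # 9(5)(c): Art 13 instructions and training

def relies_on_valid_tier(risk_record: dict) -> bool:
    """True if every tier before the one relied on was attempted first."""
    relied = TIERS.index(risk_record["relied_on_tier"])
    return all(t in risk_record["attempted"] for t in TIERS[:relied])
```

Under this check, a risk record that jumps straight to "we mention it in the instructions" without a recorded design-elimination attempt fails, which mirrors the point above: Tier 3 cannot substitute for design-based mitigation.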

Testing obligations — Articles 9(6) to 9(8)

Art 9(6) requires that high-risk AI systems are tested for the purpose of identifying the most appropriate and targeted risk management measures, and to verify that the system performs consistently for its intended purpose and complies with all Section 2 requirements.

Prior defined metrics and probabilistic thresholds — Art 9(8)

Testing must be carried out against prior defined metrics and probabilistic thresholds appropriate to the intended purpose. This means acceptable performance criteria must be defined before testing begins — not determined after results are seen. Threshold values set retroactively by reverse-engineering test results are not compliant.
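In engineering terms, this means pre-registering acceptance thresholds and then judging results only against that frozen record. The metric names and values below are invented for illustration; hashing the thresholds is just one possible way to evidence that they predate the test results.

```python
# Sketch of the Art 9(8) idea: thresholds are fixed and recorded BEFORE the
# test run, then results are evaluated against them — never the reverse.
# Metric names and values here are hypothetical.
import hashlib
import json

thresholds = {"false_negative_rate_max": 0.02, "accuracy_min": 0.95}

# Freeze the pre-registered thresholds, e.g. by hashing them into the
# technical documentation before any test results exist.
frozen = hashlib.sha256(
    json.dumps(thresholds, sort_keys=True).encode()
).hexdigest()

def evaluate(results: dict) -> bool:
    """Judge observed results against the pre-registered thresholds only."""
    return (results["false_negative_rate"] <= thresholds["false_negative_rate_max"]
            and results["accuracy"] >= thresholds["accuracy_min"])
```

The design point is directional: the comparison always flows from recorded thresholds to observed results, so there is no code path in which results can alter the acceptance criteria.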

Mandatory pre-market testing — Art 9(8)

Testing shall be performed at any time throughout the development process and, in any event, prior to the system being placed on the market or put into service. There are no exceptions to this pre-market requirement — post-deployment testing does not satisfy it.

Real-world condition testing — Art 9(7)

Testing procedures may include testing in real-world conditions in accordance with Art 60. Art 60 establishes the regulatory framework for AI regulatory sandboxes and real-world testing — including the conditions under which real-world testing can be authorised and what safeguards apply to test subjects.

Vulnerable groups — Article 9(9)

When implementing the risk management system, providers must give specific consideration to whether their high-risk AI system is likely to have an adverse impact on persons under the age of 18 — explicitly named in the Act — and, as appropriate, other vulnerable groups.

This is not a separate obligation — it is a lens that must be applied when running the four-step process in Art 9(2). In practice, this means:

  • Specifically identifying risk scenarios involving children as users, subjects, or affected parties — even if they are not the intended users
  • Evaluating misuse scenarios (Art 9(2)(b)) with consideration of how the system could interact with or affect children or other vulnerable groups
  • Ensuring testing data and performance metrics reflect the system's behaviour for these populations
  • Including specific information in deployer instructions where the system may encounter or affect vulnerable groups

Integration with other risk frameworks — Article 9(10)

Art 9(10) provides practical relief for providers already operating under other Union law risk management requirements: where a provider of a high-risk AI system is also subject to internal risk management processes under other relevant Union law, the Art 9 obligations may be part of, or combined with, those existing procedures.

Common integration scenarios:

  • GDPR Data Protection Impact Assessments — where a high-risk AI system processes personal data, the existing DPIA process required by GDPR Art 35 can be structured to satisfy the Art 9 risk identification and evaluation requirements simultaneously
  • Medical Device Regulation (MDR) / IVDR — AI systems classified as medical devices already operate under rigorous risk management standards; the MDR risk management system can be extended to encompass Art 9 elements
  • Financial sector obligations — credit institutions, insurers, and investment firms operating under EBA, EIOPA or ESMA guidelines may already maintain governance and risk frameworks that overlap substantially with Art 9
  • Machinery Regulation / Product Safety — for AI systems embedded in safety-critical hardware, existing product risk management processes can be integrated

Note: integration does not mean the Art 9 requirements can be watered down. The four steps and the mitigation hierarchy must still be addressed; they may simply be documented within an existing framework rather than a standalone document.

Is your risk management system genuinely iterative — or a one-time pre-market exercise?

The most common risk management failures in high-risk AI are not about missing the obligation entirely — they are about treating it as a document to produce once, not a process to run continuously. These signals suggest non-compliance:

  • Risk register last updated at the time of market placement — no mechanism to incorporate post-market monitoring data
  • Misuse scenarios limited to "user reads and ignores the manual" — no structured analysis of plausible misuse patterns
  • Risk mitigations jump straight to "we document it in the instructions" without demonstrating that design-based elimination was genuinely attempted first
  • Testing thresholds determined after results were obtained — no pre-defined acceptable performance criteria in the record
  • No explicit consideration of vulnerable groups — risk assessment implicitly assumes the idealised adult user in an ideal context
  • Residual risk marked “acceptable” with no documented rationale for why it has been judged acceptable

Check your compliance posture — 3 free analyses

No changes are proposed under COM(2025) 836 or COM(2025) 837 for the risk management system obligations in Article 9.

Frequently asked questions

What is a risk management system under the EU AI Act?

Under Article 9, a risk management system is a formal, documented process that must be established, implemented, and maintained throughout the entire lifecycle of a high-risk AI system. It is not a one-time pre-market exercise — the law describes it as a 'continuous iterative process' that requires regular systematic review and updating. It comprises four mandatory steps: identifying and analysing known and reasonably foreseeable risks; estimating and evaluating risks under intended use and foreseeable misuse; evaluating risks arising from post-market monitoring data; and adopting appropriate and targeted mitigation measures.

Which risks does Article 9 cover — does it cover every conceivable risk?

No. Article 9(3) contains a scoping rule: the risk management system covers only risks that may be reasonably mitigated or eliminated through development or design of the high-risk AI system, or through provision of adequate technical information. Risks that are entirely outside the provider's technical control — for example, unpredictable downstream social effects — fall outside the Article 9 scope. The obligation is to identify, document, and address what you can technically influence.

What is the correct order for applying risk management measures?

Article 9(5) establishes a three-tier sequence. First: eliminate or reduce identified risks through design and development, as far as technically feasible. Second: where risks cannot be eliminated by design, implement adequate mitigation and control measures. Third: provide the information required under Article 13 and, where appropriate, training to deployers. These tiers are sequential — you must genuinely attempt elimination before falling back to mitigation, and genuinely attempt mitigation before relying on information and training.

Does the risk management system need to cover misuse scenarios?

Yes, explicitly. Article 9(2)(b) requires you to estimate and evaluate risks that may emerge when the high-risk AI system is used under conditions of 'reasonably foreseeable misuse' — not just under its intended purpose. Reasonably foreseeable misuse means uses you could anticipate with reasonable diligence, even if they fall outside the system's documented intended purpose. You cannot limit your risk assessment to the narrow scenario of a fully compliant, perfectly trained deployer using the system exactly as documented.

What are the testing obligations under Article 9?

Article 9(6) requires testing for the purpose of identifying the most appropriate and targeted risk management measures, and to verify consistent performance and compliance with Section 2 requirements. Article 9(8) requires that testing is carried out against 'prior defined metrics and probabilistic thresholds' appropriate to the intended purpose — meaning acceptable performance criteria must be defined before testing begins, not reverse-engineered from results. Testing must be completed prior to market placement or putting into service; it may also occur at any earlier stage of development. Article 9(7) allows testing in real-world conditions under Article 60 where appropriate.

Related guides

Human Oversight (Article 14)

Five capabilities oversight persons must be enabled to exercise

Technical Documentation (Article 11)

What 17+ elements your technical documentation must cover

Conformity Assessment Guide

Art 43 — self-assessment vs notified body, when each applies

AI Provider Obligations

Complete checklist of all high-risk AI provider requirements

Data Governance (Article 10)

Training, validation and testing dataset obligations for high-risk AI

EU AI Act Fines & Penalties

All four penalty tiers under Art 99 — amounts, who pays, SME rules

Get a complete Article 9 compliance check for your system

Regumatrix analyses your system, confirms whether Art 9 applies, and generates an 8-section cited compliance report covering your risk management obligations, vulnerability considerations, testing requirements, and fine exposure under Art 99(4). No credit card. Results in about 30 seconds.

Start free — 3 analyses included · No credit card · Results in ~30 seconds