Article 9 does not ask you to file a risk report and move on. It requires a risk management system: a continuous, documented, iterative process that runs for the entire lifecycle of your high-risk AI system. This guide explains the mandatory four-step process, what risks are actually in scope, how the mitigation hierarchy works, and what your testing must prove before you can place the system on the market.
Penalty: up to €15,000,000 or 3% of global annual turnover, whichever is higher
Failing to establish, implement or maintain a compliant risk management system is a violation of Art 9, caught by Art 99(4). The fine ceiling is whichever is higher — €15 million or 3% of total worldwide annual turnover. For SMEs and startups, Art 99(6) instead applies the lower figure.
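For a quick sanity check on exposure, the ceiling logic is simple enough to express in a few lines of Python. This is an illustrative sketch only: the function name and example figures are ours, and the relevant base is the undertaking's total worldwide annual turnover for the preceding financial year.

```python
def art_99_4_ceiling(worldwide_annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative Art 99(4) fine ceiling: the higher of EUR 15 million or 3% of
    total worldwide annual turnover; for SMEs and startups, Art 99(6) applies
    the lower of the two figures instead."""
    fixed_cap = 15_000_000
    turnover_cap = 0.03 * worldwide_annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A provider with EUR 2 billion turnover faces a ceiling of EUR 60 million (3% > EUR 15m);
# an SME with EUR 10 million turnover faces a ceiling of EUR 300,000 (3% < EUR 15m).
print(art_99_4_ceiling(2_000_000_000))            # 60000000.0
print(art_99_4_ceiling(10_000_000, is_sme=True))  # 300000.0
```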
A risk management system that exists on paper but has not been updated since the system was placed on the market is not a compliant system. The law requires regular systematic review throughout the lifecycle — post-market monitoring data must feed back into the risk evaluation process.
Does your high-risk AI system need an Art 9 risk management system?
Regumatrix checks your system against Annex III and returns your risk tier, exact obligations, and a cited compliance report — including what your risk management system must cover.
Art 9(1) states that a risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems. All four verbs matter: you must create it, run it, write it down, and keep it current.
Art 9(2) then defines what that system is: a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. The lifecycle includes not just development, but the entire period during which the system is in service.
The risk management system must comprise the following four steps. These are not optional choices — the word used is “comprise”, meaning all four are required elements.
Step 1: Identify and analyse the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when used in accordance with its intended purpose. Both categories matter — you cannot limit the analysis to documented, already-manifested risks.
Step 2: Estimate and evaluate the risks that may emerge both when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse. This means you must actively consider how the system could be misused — not assume that all deployers will follow your instructions perfectly. A reasonably foreseeable misuse is one that a careful analyst could have anticipated, not just one that has already been observed.
Step 3: Evaluate other risks possibly arising, based on analysis of data gathered from the post-market monitoring system referred to in Art 72. This is the loop that makes the system genuinely iterative: real-world operational data must feed back into the risk assessment. Risks that only become visible after deployment must be captured and addressed.
Step 4: Adopt measures designed to address the risks identified under step 1. The measures must be appropriate and targeted — broad, generic risk statements without specific corresponding measures do not satisfy this requirement. The measures must also account for the combined effect of all Section 2 requirements applied together, with a view to minimising risks while achieving an appropriate balance (Art 9(4)).
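One way to keep these four steps operational rather than theoretical is to hold every identified risk in a living register that records where the risk came from, what measures address it, and when it was last reviewed. The sketch below is a minimal Python illustration of such a register; the class and field names are our own assumptions, not terms from the Act, and the review interval is purely illustrative.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskSource(Enum):
    INTENDED_PURPOSE = "step 1: identified under the intended purpose"
    FORESEEABLE_MISUSE = "step 2: identified under reasonably foreseeable misuse"
    POST_MARKET_MONITORING = "step 3: identified from Art 72 monitoring data"


@dataclass
class RiskEntry:
    description: str
    source: RiskSource
    affected_interest: str                               # health, safety or fundamental rights
    measures: list[str] = field(default_factory=list)    # step 4: appropriate, targeted measures
    residual_risk_acceptable: bool = False
    last_reviewed: date | None = None


def needs_review(entry: RiskEntry, max_age_days: int = 180) -> bool:
    """Flag entries that have fallen out of the 'regular systematic review'
    a continuous, iterative process requires."""
    if entry.last_reviewed is None:
        return True
    return (date.today() - entry.last_reviewed).days > max_age_days
```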
Art 9(3) contains an important scoping rule: the risk management system covers only those risks which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or through the provision of adequate technical information.
In practice, this means two categories of risk are in scope: risks you can reasonably mitigate or eliminate through how the system is developed and designed, and risks you can reasonably address by providing adequate technical information to deployers.
Risks that are entirely outside your technical influence — such as wholly unpredictable downstream second-order social effects — fall outside the Art 9(3) scope. However, this scoping rule should not be read opportunistically: if a risk could have been mitigated by a different design choice, it is in scope.
Art 9(5) requires that the residual risk associated with each hazard — and the overall residual risk of the system — is judged to be acceptable. To reach that point, the law specifies a three-tier mitigation sequence. The tiers are applied in order: you must genuinely attempt each tier before relying on the next.
Tier 1: Eliminate or reduce identified risks as far as technically feasible through adequate design and development of the high-risk AI system. This is the preferred approach — redesign the system so the risk does not exist, or exists at a substantially reduced level. The qualifier “technically feasible” means that if redesign is possible, it must be attempted; cost or convenience alone does not make it “infeasible.”
Tier 2: Where risks cannot be eliminated through Tier 1, implement adequate mitigation and control measures — guardrails, usage restrictions, access controls, monitoring mechanisms. The mitigation must be genuine: a document acknowledging the risk without a corresponding technical or procedural control does not satisfy this tier.
Tier 3: Provide the information required under Art 13 — instructions for use, limitations, known risk scenarios — and, where appropriate, training to deployers. In doing so, the law requires due consideration of the deployer's technical knowledge, experience, education, and the training they can be expected to have, as well as the presumable context of use.
Tier 3 addresses residual risk only — risks that genuinely cannot be further reduced by Tiers 1 or 2. It cannot be used as a substitute for design-based mitigation.
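To make the ordering concrete, the sketch below expresses the three tiers as control flow. It is an illustration under our own assumptions: the boolean flags stand in for substantive engineering judgments (technical feasibility, adequacy of controls, acceptability of residual risk) that must be made and documented by people, not inferred by code.

```python
def mitigation_measures(risk: str,
                        eliminable_by_design: bool,
                        controllable_by_measures: bool) -> list[str]:
    """Illustrative ordering of the Art 9(5) tiers for a single identified risk."""
    measures = [
        f"[{risk}] Tier 1: eliminate or reduce the risk through design and "
        "development, as far as technically feasible"
    ]
    if not eliminable_by_design:
        measures.append(
            f"[{risk}] Tier 2: adequate mitigation and control measures "
            "(guardrails, usage restrictions, access controls, monitoring)"
        )
    if not eliminable_by_design and not controllable_by_measures:
        measures.append(
            f"[{risk}] Tier 3: Art 13 information, instructions for use and, "
            "where appropriate, deployer training for the residual risk"
        )
    return measures


# Example: a risk that redesign cannot fully remove and controls only partly
# address passes through all three tiers, in order.
for step in mitigation_measures("automation bias in triage decisions",
                                eliminable_by_design=False,
                                controllable_by_measures=False):
    print(step)
```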
Art 9(6) requires that high-risk AI systems are tested for the purpose of identifying the most appropriate and targeted risk management measures, and to verify that the system performs consistently for its intended purpose and complies with all Section 2 requirements.
Testing must be carried out against prior defined metrics and probabilistic thresholds appropriate to the intended purpose. This means acceptable performance criteria must be defined before testing begins — not determined after results are seen. Thresholds reverse-engineered from test results you have already obtained are not compliant.
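In practice, that means freezing your acceptance criteria — ideally under version control — before the first test run, and judging results only against those frozen values. A minimal sketch, assuming hypothetical metric names and threshold figures of our own choosing:

```python
# Acceptance criteria fixed before testing begins (Art 9(8): 'prior defined
# metrics and probabilistic thresholds'). The metric names and figures here
# are illustrative assumptions, not values prescribed by the Act.
PRE_DEFINED_THRESHOLDS = {
    "false_negative_rate": ("max", 0.02),
    "false_positive_rate": ("max", 0.05),
    "accuracy":            ("min", 0.95),
}

def evaluate(results: dict[str, float]) -> dict[str, bool]:
    """Judge measured results only against the thresholds defined in advance."""
    verdict = {}
    for metric, (direction, threshold) in PRE_DEFINED_THRESHOLDS.items():
        value = results[metric]
        verdict[metric] = value <= threshold if direction == "max" else value >= threshold
    return verdict

print(evaluate({"false_negative_rate": 0.01, "false_positive_rate": 0.06, "accuracy": 0.97}))
# {'false_negative_rate': True, 'false_positive_rate': False, 'accuracy': True}
```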
Testing shall be performed, as appropriate, at any time throughout the development process and, in any event, before the system is placed on the market or put into service. There are no exceptions to this pre-market requirement — testing done only after deployment does not satisfy it.
Testing procedures may include testing in real-world conditions in accordance with Art 60. Art 60 sets out the framework for testing high-risk AI systems in real-world conditions outside AI regulatory sandboxes, including the conditions under which such testing can take place and the safeguards that apply to test subjects.
When implementing the risk management system, providers must give specific consideration to whether their high-risk AI system is likely to have an adverse impact on persons under the age of 18 — explicitly named in the Act — and, as appropriate, other vulnerable groups.
This is not a separate obligation — it is a lens that must be applied when running the four-step process in Art 9(2). In practice, risks to minors and other vulnerable groups must be considered at every step: when identifying and analysing risks, when estimating and evaluating them under intended use and foreseeable misuse, when evaluating post-market monitoring data, and when choosing mitigation measures.
Art 9(10) provides practical relief for providers already operating under other Union law risk management requirements: where a provider of a high-risk AI system is also subject to internal risk management processes under other relevant Union law, the Art 9 obligations may be part of, or combined with, those existing procedures.
The most common scenarios are providers who already run formal risk management under sectoral Union law, such as medical device manufacturers applying the risk management requirements of the MDR/IVDR, or financial institutions subject to internal governance and risk management rules under Union financial services law.
Note: integration does not mean the Art 9 requirements can be watered down. The four steps and the mitigation hierarchy must still be addressed; they may simply be documented within an existing framework rather than a standalone document.
The most common risk management failures in high-risk AI are not about missing the obligation entirely — they are about treating it as a document to produce once, not a process to run continuously. Signals that suggest non-compliance include: a risk file that has not been touched since the system was placed on the market; post-market monitoring data that never reaches the risk evaluation; broad, generic risk statements with no corresponding measures; and acceptance thresholds defined only after test results were seen.
No changes are proposed under COM(2025) 836 or COM(2025) 837 for the risk management system obligations in Article 9.
Under Article 9, a risk management system is a formal, documented process that must be established, implemented, and maintained throughout the entire lifecycle of a high-risk AI system. It is not a one-time pre-market exercise — the law describes it as a 'continuous iterative process' that requires regular systematic review and updating. It comprises four mandatory steps: identifying and analysing known and reasonably foreseeable risks; estimating and evaluating risks under intended use and foreseeable misuse; evaluating risks arising from post-market monitoring data; and adopting appropriate and targeted mitigation measures.
No. Article 9(3) contains a scoping rule: the risk management system covers only risks that may be reasonably mitigated or eliminated through development or design of the high-risk AI system, or through provision of adequate technical information. Risks that are entirely outside the provider's technical control — for example, unpredictable downstream social effects — fall outside the Article 9 scope. The obligation is to identify, document, and address what you can technically influence.
Article 9(5) establishes a three-tier sequence. First: eliminate or reduce identified risks through design and development, as far as technically feasible. Second: where risks cannot be eliminated by design, implement adequate mitigation and control measures. Third: provide the information required under Article 13 and, where appropriate, training to deployers. These tiers are sequential — you must genuinely attempt elimination before falling back to mitigation, and genuinely attempt mitigation before relying on information and training.
Yes, explicitly. Article 9(2)(b) requires you to estimate and evaluate risks that may emerge when the high-risk AI system is used under conditions of 'reasonably foreseeable misuse' — not just under its intended purpose. Reasonably foreseeable misuse means uses you could anticipate with reasonable diligence, even if they fall outside the system's documented intended purpose. You cannot limit your risk assessment to the narrow scenario of a fully compliant, perfectly trained deployer using the system exactly as documented.
Article 9(6) requires testing for the purpose of identifying the most appropriate and targeted risk management measures, and to verify consistent performance and compliance with Section 2 requirements. Article 9(8) requires that testing is carried out against 'prior defined metrics and probabilistic thresholds' appropriate to the intended purpose — meaning acceptable performance criteria must be defined before testing begins, not reverse-engineered from results. Testing must be completed prior to market placement or putting into service; it may also occur at any earlier stage of development. Article 9(7) allows testing in real-world conditions under Article 60 where appropriate.
Human Oversight (Article 14)
Five capabilities oversight persons must be enabled to exercise
Technical Documentation (Article 11)
What 17+ elements your technical documentation must cover
Conformity Assessment Guide
Art 43 — self-assessment vs notified body, when each applies
AI Provider Obligations
Complete checklist of all high-risk AI provider requirements
Data Governance (Article 10)
Training, validation and testing dataset obligations for high-risk AI
EU AI Act Fines & Penalties
All four penalty tiers under Art 99 — amounts, who pays, SME rules
Regumatrix analyses your system, confirms whether Art 9 applies, and generates an 8-section cited compliance report covering your risk management obligations, vulnerability considerations, testing requirements, and fine exposure under Art 99(4). No credit card. Results in about 30 seconds.