If your general-purpose AI model was trained using more than 10²⁵ floating-point operations, it crosses the systemic risk threshold. That triggers four specific obligations beyond the standard GPAI rules — adversarial testing, Union-level risk assessment, incident reporting to the AI Office, and cybersecurity for the model and its infrastructure.
Fine: €15,000,000 or 3% of global annual turnover — whichever is higher
Unlike the €15M/3% fine for high-risk AI systems (where the lower of the two applies for SMEs), Article 101 applies the higher figure. Fines are enforced directly by the European Commission, not by national authorities. Article 101 applies from 2 August 2026. Art 101
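To make the higher-of-two rule concrete, here is a minimal sketch; the function name and the turnover figures are ours for illustration, not taken from the Act:

```python
def article_101_fine_cap(global_annual_turnover_eur: float) -> float:
    """Upper bound of an Article 101 fine: EUR 15,000,000 or 3% of total
    worldwide annual turnover, whichever is HIGHER (note the contrast with
    the SME rule for high-risk systems, where the lower figure applies)."""
    FIXED_CAP_EUR = 15_000_000
    TURNOVER_SHARE = 0.03
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# Hypothetical turnovers:
print(article_101_fine_cap(100_000_000))    # 3% = EUR 3M,  so the EUR 15M floor applies
print(article_101_fine_cap(2_000_000_000))  # 3% = EUR 60M, so the percentage applies
```

Below €500M in turnover the fixed €15M figure dominates; above it, the 3% share takes over.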
Not sure if your model crosses the threshold? Regumatrix checks your system against Article 51, classifies your GPAI risk tier, and returns the exact obligations that apply — in 30 seconds. Check your model free →
There are two routes to classification. The first is automatic: only the compute threshold matters. The second gives the Commission discretion. Art 51
A model is presumed to have high-impact capabilities — and therefore systemic risk — when its cumulative training compute exceeds 10²⁵ floating-point operations (FLOPs). This is the current threshold. The Commission can amend it by delegated act as hardware efficiency and algorithmic progress evolve. Art 51(2)
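For a rough sense of what 10²⁵ FLOPs means in practice, the scaling-law literature's ≈6 × parameters × tokens estimate for dense transformer training can be used. This heuristic, the function, and the example model size are assumptions for illustration, not the Act's counting method:

```python
THRESHOLD_FLOPS = 1e25  # Art 51(2) presumption threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Back-of-the-envelope estimate for dense transformers:
    roughly 6 FLOPs per parameter per training token
    (forward plus backward pass)."""
    return 6 * n_params * n_tokens

# Hypothetical frontier-scale run: 500B parameters on 15T tokens
flops = estimated_training_flops(5e11, 1.5e13)
print(f"{flops:.1e} FLOPs")         # ~4.5e25, above the threshold
print(flops > THRESHOLD_FLOPS)      # True -> presumed systemic risk
```

The takeaway: current frontier-scale runs can exceed the threshold severalfold, which is why the Commission retains the power to adjust it by delegated act.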
This presumption is rebuttable. A provider whose model crosses the threshold can submit substantiated arguments to the Commission explaining why their specific model does not present systemic risk despite its compute level. The Commission then decides whether to accept or reject those arguments. Art 52(2)–(3)
The Commission can designate a model as systemic risk even if it falls below the compute threshold — acting on its own initiative or on a qualified alert from the scientific panel. Designation is based on the Annex XIII criteria (see below). Art 51(1)(b)
After designation, a provider may request reassessment at the earliest six months after the Commission's decision, submitting new objective reasons not available at the time of designation. Art 52(5)
Once your model meets the compute threshold — or you learn it will — you must notify the Commission without delay, and at the latest within two weeks. The notification must include information demonstrating the threshold has been met. If the Commission discovers a systemic-risk model that was not notified, it may designate it unilaterally. Art 52(1)
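The two-week clock is simple to track. A minimal sketch, with a hypothetical trigger date; the trigger is whichever comes first, the threshold actually being met or the date you learn it will be:

```python
from datetime import date, timedelta

def notification_deadline(trigger: date) -> date:
    """Latest notification date under Art 52(1): two weeks from the day
    the threshold is met, or from when you learn it will be met."""
    return trigger + timedelta(weeks=2)

print(notification_deadline(date(2026, 3, 1)))  # 2026-03-15
```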
These apply in addition to the standard GPAI obligations under Article 53. You cannot satisfy these by pointing to your Article 53 compliance alone. Art 55
You must perform model evaluation using standardised protocols and tools reflecting the state of the art. This must include conducting and documenting adversarial testing — structured attempts to find the model's failure modes, harmful outputs, jailbreak vulnerabilities, and systemic-level harms.
The results of both the evaluation and the adversarial testing must be documented with enough detail to allow the AI Office to assess the adequacy of the process. Until a harmonised standard is published, the GPAI Code of Practice (Art 56) is the primary reference for what "state of the art" testing looks like.
You must assess and mitigate systemic risks at Union level, including their sources. The assessment must cover systemic risks that may stem from the development of the model, its placing on the market, or its use.
This is a Union-level obligation — it covers harms that could affect populations, critical infrastructure, or democratic processes across the EU, not just individual users.
You must track, document, and report serious incidents to the AI Office, and to national competent authorities as appropriate, without undue delay. Reports must include the relevant information about the incident and any possible corrective measures to address it.
Unlike the high-risk AI system incident reporting obligation (Art 73), which routes through national authorities, systemic risk GPAI incident reporting goes directly to the AI Office. The GPAI Code of Practice sets out what information a report must contain.
You must ensure an adequate level of cybersecurity protection for two things: the model itself, and the physical infrastructure on which the model runs. This is distinct from the cybersecurity obligation in Annex I product safety law — it applies to the frontier model as an AI system in its own right.
The law does not prescribe a specific security standard. "Adequate" is assessed in context, taking into account the scale of the model and the severity of potential harms if the model or its infrastructure were compromised.
Article 55 obligations add to Article 53; they do not replace it. Systemic risk GPAI providers must comply with all four standard GPAI obligations: technical documentation (Annex XI), information for downstream providers (Annex XII), a copyright compliance policy, and a publicly available summary of training content.
A critical carve-out falls away here: Article 53(2) lets open-source GPAI providers skip the Annex XI and Annex XII obligations, but that exemption is disapplied for systemic risk models. If your open-weights model crosses 10²⁵ FLOPs, the full Art 53 package applies. Art 53(2)
When the Commission designates a model as systemic risk on capability grounds rather than compute alone, it applies seven criteria from Annex XIII:
| Criterion | What it looks at |
|---|---|
| (a) Parameters | Number of model parameters |
| (b) Dataset quality/size | Quality or size of training data, measured in tokens |
| (c) Training compute | FLOPs, or proxies: estimated cost, training time, energy consumption |
| (d) Modalities | Input/output modalities: text-to-text, text-to-image, multi-modal, biological sequences, assessed against the state of the art for each modality |
| (e) Benchmarks | Zero-shot task performance, adaptability, autonomy, scalability, tool access |
| (f) EU market reach | High internal market impact is presumed when available to at least 10,000 registered business users in the EU |
| (g) End-users | Total number of registered end-users |
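The two classification routes can be sketched as a small decision helper. The class and field names are illustrative, and only the Art 51(2) compute presumption and the Annex XIII(f) market-reach criterion are modelled; the Commission may designate on any of the seven criteria:

```python
from dataclasses import dataclass

@dataclass
class GpaiModel:
    # Field names are illustrative, not the Act's terminology.
    training_flops: float
    eu_registered_business_users: int

def classification_route(m: GpaiModel) -> str:
    """Sketch of the two Art 51 routes: automatic (rebuttable) presumption
    above 10^25 FLOPs; otherwise discretionary Commission designation, for
    which Annex XIII(f) market reach is one of seven criteria."""
    if m.training_flops > 1e25:
        return "presumed_systemic_risk"       # Art 51(2), rebuttable under Art 52(2)
    if m.eu_registered_business_users >= 10_000:
        return "commission_may_designate"     # Annex XIII(f): high market impact presumed
    return "standard_gpai_obligations"        # Art 53 only

print(classification_route(GpaiModel(3e25, 500)))      # presumed_systemic_risk
print(classification_route(GpaiModel(8e24, 25_000)))   # commission_may_designate
```

Note the asymmetry: crossing the compute line triggers the presumption automatically, while the market-reach figure only grounds one designation criterion that the Commission may or may not act on.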
Until a harmonised EU standard covering Article 55 is published, you can rely on the GPAI Code of Practice (developed under Article 56) to demonstrate compliance with Art 55 obligations. This is the primary practical compliance mechanism for systemic risk providers right now.
If you neither adhere to an approved Code of Practice nor comply with a harmonised standard, you must demonstrate alternative adequate means of compliance directly to the Commission. Art 55(2)
Article 55 obligations — Applicable since August 2025
Chapter V of the EU AI Act, including the Art 55 systemic risk obligations, applies from 2 August 2025. Providers of systemic risk GPAI models have been subject to these requirements since that date.
Article 101 fines — 4 months away
Article 101 (the Commission's power to fine GPAI providers) was separately deferred to 2 August 2026 under Article 113(b). The obligations came into effect before the fine power: compliance has been expected since August 2025, but the Commission's direct fining mechanism only activates on 2 August 2026. Art 101
No changes are proposed under COM(2025) 836 or COM(2025) 837 for this topic.
The law sets one quantitative threshold: if the cumulative amount of computation used for training your model exceeds 10²⁵ floating point operations (FLOPs), your model is presumed to have systemic risk under Article 51(2). You are not automatically exempt just because your model was fine-tuned from a base model — the threshold refers to the training compute of the model itself. Beyond the threshold, the Commission can also designate models as systemic risk based on Annex XIII criteria, including large EU user base (10,000+ registered business users), multi-modal capability, and benchmark performance — even if training compute is below the threshold.
Article 55(1)(a) requires providers of systemic risk GPAI models to perform model evaluation using standardised protocols and tools reflecting the state of the art. This includes conducting and documenting adversarial testing — structured attempts to find failure modes, harmful outputs, and exploitable weaknesses in the model. The results must be documented. You can rely on the GPAI Code of Practice (Article 56) to demonstrate what 'standardised protocols' means in practice, until a harmonised standard is published.
Article 55(1)(c) requires incident reporting but does not further define 'serious incident' for GPAI specifically in that article. The AI Office's GPAI Code of Practice and implementing guidance are expected to clarify this. As a baseline, an incident is serious if it results in — or could reasonably result in — significant harm at Union level, exploitation of a known vulnerability, or material failure of the model to operate within expected safety envelopes. Report without undue delay, include the incident description, and include any corrective measures you are taking.
No. Article 53(2) of the EU AI Act gives open-source GPAI model providers an exemption from the detailed technical documentation (Annex XI) and downstream provider information (Annex XII) obligations. However, that exemption is explicitly disapplied for GPAI models with systemic risk. If your open-weights model crosses the 10²⁵ FLOPs threshold, all Art 55 systemic risk obligations apply. The open-source exemption only helps providers of models below the threshold.
Article 55 obligations themselves apply from 2 August 2025 — the date Chapter V entered into force. However, Article 101 (the fine provision for GPAI providers) was specifically deferred and applies from 2 August 2026 (see Article 113(b) of the EU AI Act). The Commission enforces Article 101 — not national market surveillance authorities — which is different from how high-risk AI system fines work under Article 99.
Regumatrix checks your model against Article 51's classification rules and returns your complete GPAI obligation profile — whether you need adversarial testing, incident reporting, and cybersecurity plans under Article 55, or only the standard Article 53 obligations. You get a cited compliance report in around 30 seconds, at no cost.