

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
GPAI — Article 55 · €15M / 3% · 4 months away

GPAI Systemic Risk: What the EU AI Act Requires

If your general-purpose AI model was trained using more than 10²⁵ floating-point operations, it crosses the systemic risk threshold. That triggers four specific obligations beyond the standard GPAI rules — adversarial testing, Union-level risk assessment, incident reporting to the AI Office, and cybersecurity for the model and its infrastructure.

Fine: €15,000,000 or 3% of global annual turnover — whichever is higher

Unlike the €15M/3% fine for high-risk AI systems (where the lower of the two applies for SMEs), Article 101 applies the higher figure. Fines are enforced directly by the European Commission, not by national authorities. Article 101 applies from 2 August 2026. Art 101
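The "whichever is higher" rule can be sketched in a few lines. This is a minimal illustration, assuming only the figures stated above; the function name and the example turnover amounts are ours, not from the Regulation:

```python
def article_101_fine_cap(global_annual_turnover_eur: float) -> float:
    """Maximum fine exposure under Article 101: the HIGHER of
    EUR 15,000,000 or 3% of total worldwide annual turnover."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)

# Large provider: 3% of EUR 2bn is EUR 60m, above the EUR 15m floor.
print(article_101_fine_cap(2_000_000_000))  # 60000000.0

# Smaller provider: 3% of EUR 100m is only EUR 3m, so the floor applies.
print(article_101_fine_cap(100_000_000))    # 15000000.0
```

Note the contrast with Article 99 for high-risk systems, where SMEs get the lower of the two figures.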

Not sure if your model crosses the threshold? Regumatrix checks your system against Article 51, classifies your GPAI risk tier, and returns the exact obligations that apply — in 30 seconds. Check your model free →

How a model is classified as systemic risk

There are two routes. The first is automatic — only the threshold count matters. The second gives the Commission discretion. Art 51

1. The compute threshold — automatic presumption

A model is presumed to have high-impact capabilities — and therefore systemic risk — when its cumulative training compute exceeds 10²⁵ floating-point operations (FLOPs). This is the current threshold. The Commission can amend it by delegated act as hardware efficiency and algorithmic progress evolve. Art 51(2)

This presumption is rebuttable. A provider whose model crosses the threshold can submit substantiated arguments to the Commission explaining why their specific model does not present systemic risk despite its compute level. The Commission then decides whether to accept or reject those arguments. Art 52(2)–(3)
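To gauge whether a training run approaches the threshold, practitioners often use the rough 6 × parameters × tokens estimate for dense transformer training. That heuristic comes from the ML community, not from the Regulation (the Act counts actual cumulative training compute, including fine-tuning runs), but it gives a first-pass sketch:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art 51(2) compute threshold

def estimate_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough compute estimate via the ~6 * N * D rule of thumb for
    dense transformer training (forward + backward passes).
    A community heuristic, NOT a figure from the Regulation."""
    return 6.0 * n_parameters * n_tokens

def presumed_systemic_risk(cumulative_flops: float) -> bool:
    """Art 51(2): the presumption applies when cumulative training
    compute exceeds 1e25 FLOPs."""
    return cumulative_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 15T tokens.
flops = estimate_training_flops(70e9, 15e12)  # ~6.3e24, below 1e25
print(presumed_systemic_risk(flops))          # False
```

A model in that range sits below the presumption, but remember that subsequent fine-tuning compute counts toward the cumulative total.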

2. Commission designation — capability-based

The Commission can designate a model as systemic risk even if it falls below the compute threshold — acting on its own initiative or on a qualified alert from the scientific panel. Designation is based on the Annex XIII criteria (see below). Art 51(1)(b)

After designation, a provider may request reassessment at the earliest six months after the Commission's decision, submitting new objective reasons not available at the time of designation. Art 52(5)

Notification obligation: 2-week deadline

Once your model meets the compute threshold — or you learn it will — you must notify the Commission without delay, and at the latest within two weeks. The notification must include information demonstrating the threshold has been met. If the Commission discovers a systemic-risk model that was not notified, it may designate it unilaterally. Art 52(1)
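The two-week clock is simple date arithmetic. A minimal sketch, assuming a trigger date for illustration only:

```python
from datetime import date, timedelta

def notification_deadline(threshold_met_on: date) -> date:
    """Latest notification date under Art 52(1): 'without delay and
    at the latest within two weeks' of meeting the threshold (or of
    learning that it will be met)."""
    return threshold_met_on + timedelta(weeks=2)

# If the threshold is met (or known to be met) on 4 April 2026:
print(notification_deadline(date(2026, 4, 4)))  # 2026-04-18
```

The safer reading is "without delay": two weeks is the outer limit, not the target.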

The four systemic risk obligations (Article 55)

These apply in addition to the standard GPAI obligations under Article 53. You cannot satisfy these by pointing to your Article 53 compliance alone. Art 55

(a) Model evaluation and adversarial testing — Art 55(1)(a)

You must perform model evaluation using standardised protocols and tools reflecting the state of the art. This must include conducting and documenting adversarial testing — structured attempts to find the model's failure modes, harmful outputs, jailbreak vulnerabilities, and systemic-level harms.

The results of both the evaluation and the adversarial testing must be documented with enough detail to allow the AI Office to assess the adequacy of the process. Until a harmonised standard is published, the GPAI Code of Practice (Art 56) is the primary reference for what "state of the art" testing looks like.

(b) Systemic risk assessment and mitigation at Union level — Art 55(1)(b)

You must assess and mitigate systemic risks at Union level, including their sources. The risk assessment must cover systemic risks arising from:

  • The development process itself
  • Placing the model on the market
  • Actual use downstream — including foreseeable misuse via the API

This is a Union-level obligation — it covers harms that could affect populations, critical infrastructure, or democratic processes across the EU, not just individual users.

(c) Serious incident reporting to the AI Office — Art 55(1)(c)

You must track, document, and report serious incidents to the AI Office — and to national competent authorities as appropriate — without undue delay. Reports must include:

  • Relevant information about the incident
  • Possible corrective measures you are taking or plan to take

Unlike the high-risk AI system incident reporting obligation (Art 73), which routes through national authorities, systemic risk GPAI incident reporting goes directly to the AI Office. The GPAI Code of Practice sets out what information a report must contain.

(d) Cybersecurity protection — Art 55(1)(d)

You must ensure an adequate level of cybersecurity protection for two things: the model itself, and the physical infrastructure on which the model runs. This is distinct from the cybersecurity obligation in Annex I product safety law — it applies to the frontier model as an AI system in its own right.

The law does not prescribe a specific security standard. "Adequate" is assessed in context, taking into account the scale of the model and the severity of potential harms if the model or its infrastructure were compromised.

Art 53 base obligations also apply — and the open-source exemption does not

Article 55 obligations add to Article 53 — they do not replace it. Systemic risk GPAI providers must comply with all four standard GPAI obligations:

  • Technical documentation (Annex XI) — for AI Office access on request
  • Information to downstream providers (Annex XII) — so integrators understand capabilities and limits
  • Copyright compliance policy — identifying and respecting rights reservations under Art 4(3) of Directive 2019/790
  • Public training data summary — published according to the AI Office template

Critical carve-out removed: Article 53(2) lets open-source GPAI providers skip the Annex XI and Annex XII obligations. But that exemption is disapplied for systemic risk models. If your open-weights model crosses 10²⁵ FLOPs, the full Art 53 package applies. Art 53(2)

Annex XIII: how the Commission designates below the threshold

When the Commission designates a model as systemic risk on capability grounds rather than compute alone, it applies seven criteria from Annex XIII:

  • (a) Parameters: number of model parameters
  • (b) Dataset quality/size: quality or size of the training data, measured in tokens
  • (c) Training compute: FLOPs, or proxies such as estimated cost, training time, or energy consumption
  • (d) Modalities: text-to-text, text-to-image, multi-modal, biological sequences; state-of-the-art threshold per modality
  • (e) Benchmarks: zero-shot task performance, adaptability, autonomy, scalability, tool access
  • (f) EU market reach: high internal-market impact is presumed when the model is available to at least 10,000 registered business users in the EU
  • (g) End-users: total number of registered end-users

Using the GPAI Code of Practice to demonstrate compliance

Until a harmonised EU standard covering Article 55 is published, you can rely on the GPAI Code of Practice (developed under Article 56) to demonstrate compliance with Art 55 obligations. This is the primary practical compliance mechanism for systemic risk providers right now.

If you neither adhere to an approved Code of Practice nor comply with a harmonised standard, you must demonstrate alternative adequate means of compliance directly to the Commission. Art 55(2)

When these rules apply

✓ Article 55 obligations — in force since August 2025

Chapter V of the EU AI Act — including Art 55 systemic risk obligations — entered into force on 2 August 2025. Providers of systemic risk GPAI models have been subject to these requirements since that date.

! Article 101 fines — 4 months away

Article 101 (the Commission's fine power over GPAI providers) was separately deferred to 2 August 2026 under Article 113(b). The obligations are active ahead of the fine power: compliance is already expected, but the Commission's direct fine enforcement mechanism only activates on 2 August 2026. Art 101

No changes are proposed under COM(2025) 836 or COM(2025) 837 for this topic.

Five situations where providers underestimate their exposure

  • Training compute calculated from the base model only, ignoring continued fine-tuning phases
  • Assuming open-source release removes all systemic risk obligations
  • Reaching 10,000 registered EU business users without triggering an Annex XIII review
  • Adversarial testing done informally, without standardised protocols or documented results
  • Treating infrastructure cybersecurity as IT operations only, not an AI Act compliance obligation
Check your GPAI model's risk tier →

Frequently asked questions

How do I know if my GPAI model has systemic risk?

The law sets one quantitative threshold: if the cumulative amount of computation used for training your model exceeds 10²⁵ floating-point operations (FLOPs), your model is presumed to have systemic risk under Article 51(2). You are not automatically exempt just because your model was fine-tuned from a base model — the threshold refers to the cumulative training compute of the model itself. Beyond the threshold, the Commission can also designate models as systemic risk based on Annex XIII criteria, including large EU user base (10,000+ registered business users), multi-modal capability, and benchmark performance — even if training compute is below the threshold.

What is adversarial testing and where does it need to be documented?

Article 55(1)(a) requires providers of systemic risk GPAI models to perform model evaluation using standardised protocols and tools reflecting the state of the art. This includes conducting and documenting adversarial testing — structured attempts to find failure modes, harmful outputs, and exploitable weaknesses in the model. The results must be documented. You can rely on the GPAI Code of Practice (Article 56) to demonstrate what 'standardised protocols' means in practice, until a harmonised standard is published.

What counts as a 'serious incident' that must be reported to the AI Office?

Article 55(1)(c) requires incident reporting but does not further define 'serious incident' for GPAI specifically in that article. The AI Office's GPAI Code of Practice and implementing guidance are expected to clarify this. As a baseline, an incident is serious if it results in — or could reasonably result in — significant harm at Union level, exploitation of a known vulnerability, or material failure of the model to operate within expected safety envelopes. Report without undue delay, include the incident description, and include any corrective measures you are taking.

Does the open-source exemption remove systemic risk obligations?

No. Article 53(2) of the EU AI Act gives open-source GPAI model providers an exemption from the detailed technical documentation (Annex XI) and downstream provider information (Annex XII) obligations. However, that exemption is explicitly disapplied for GPAI models with systemic risk. If your open-weights model crosses the 10²⁵ FLOPs threshold, all Art 55 systemic risk obligations apply. The open-source exemption only helps providers of models below the threshold.

When do the fines under Article 101 apply?

Article 55 obligations themselves apply from 2 August 2025 — the date Chapter V entered into force. However, Article 101 (the fine provision for GPAI providers) was specifically deferred and applies from 2 August 2026 (see Article 113(b) of the EU AI Act). The Commission enforces Article 101 — not national market surveillance authorities — which is different from how high-risk AI system fines work under Article 99.

Related guides

  • GPAI model obligations overview
  • Open-source AI exemptions
  • EU AI Act fines & penalties
  • AI provider obligations
  • Incident reporting & post-market monitoring
  • COM(2025) 836 — what changes

Does your GPAI model have systemic risk obligations?

Regumatrix checks your model against Article 51's classification rules and returns your complete GPAI obligation profile — whether you need adversarial testing, incident reporting, and cybersecurity plans under Article 55, or only the standard Article 53 obligations. You get a cited compliance report in around 30 seconds, at no cost.

  • ✓ GPAI risk tier — standard GPAI or systemic risk
  • ✓ Article 55 obligation checklist if systemic risk applies
  • ✓ Open-source exemption check — what still applies
  • ✓ Fine exposure under Article 101
  • ✓ Codes of practice compliance pathway
  • ✓ Full report with Article citations, ~30 seconds, no credit card
Get your free compliance report →
GPAI overview