Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Article 25 — Value Chain Responsibilities · €15M / 3% penalty exposure · 3 provider-status triggers · In force 2 Aug 2026

EU AI Act Article 25: When You Become the AI Provider

You did not build the AI. You bought it, branded it, integrated it, or modified it. But under Article 25 of the EU AI Act, three specific actions transfer full provider status — and all of the obligations that come with it — to you. Product manufacturers, system integrators, rebranders, and anyone who makes a substantial modification to a high-risk AI system need to understand this provision. The consequence of getting it wrong is the same as if you had built the AI from scratch: fines of up to €15 million or 3% of global annual turnover, whichever is higher.
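The exposure scales with company size. As a rough sketch — assuming the Art 99(4) formula of €15 million or 3% of total worldwide annual turnover for the preceding financial year, whichever is higher — the ceiling works out as:

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for breaching provider obligations:
    the higher of EUR 15 million or 3% of total worldwide annual
    turnover for the preceding financial year (Art 99(4))."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)

# A company with EUR 2bn turnover: 3% = EUR 60M, above the EUR 15M floor
print(max_penalty_eur(2_000_000_000))  # 60000000.0
```

For smaller companies the €15 million floor dominates: at €100 million turnover, 3% is only €3 million, so the cap stays at €15 million.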

The original developer is no longer responsible — you are

When Article 25(1) triggers, the previous provider ceases to be the provider for that AI system under the regulation. Responsibility — and full liability for any breach — transfers to you. There is no shared liability in this provision. You inherit the full Art 16 obligation chain: quality management, technical documentation, conformity assessment (including notified body certification if required), CE marking, EU database registration, post-market monitoring, and more.

Has your company rebranded, substantially modified, or integrated a third-party AI system?

Regumatrix maps your exact role — provider, deployer, or distributor — and generates an obligation checklist in about 30 seconds. Built on the full EU AI Act text.

Check your role — 3 free analyses

The three Art 25(1) triggers: when you become the provider

Article 25(1) applies to any distributor, importer, deployer, or other third party in the supply chain. If ANY of the following three conditions applies to you, you are now a provider under the EU AI Act — regardless of any contract with the original developer.

A

Rebranding — placing your own name or trademark on the system

Art 25(1)(a)

If you place your name, registered trade name, or trademark on a high-risk AI system that has already been placed on the market or put into service, you become the provider of that system. This is the white-labelling trap: companies that take a third-party AI and market it under their own brand are providers — not distributors.

Examples: A European HR software company resells a US-built CV screening AI under their own logo. A medical device manufacturer integrates a diagnostic AI from a third party but presents it to hospitals as their own “intelligent diagnostic module.” A retail analytics platform wraps an AI recommender engine from another vendor and sells it under their brand name.

Contract exception: Contractual arrangements can allocate specific obligations elsewhere, but the default position under the AI Act is that the entity whose name is on the system is the provider. Contracts cannot override the regulatory definition.

B

Substantial modification — materially changing a high-risk system

Art 25(1)(b)

Making a substantial modification to a high-risk AI system already on the market — such that it remains high-risk under Art 6 — makes you the new provider. Article 3(23) defines substantial modification as a modification not foreseen or planned by the provider in the original risk assessment, which has the potential to affect compliance with Chapter III Section 2 requirements or which changes the intended purpose.

Likely IS a substantial modification:

  • ▸ Retraining the AI model on different datasets
  • ▸ Changing the decision thresholds or output logic
  • ▸ Modifying the architecture (e.g., adding layers or components not in the provider’s design)
  • ▸ Integrating the AI with new sensors or data sources not covered by the original risk assessment
  • ▸ Altering the system in ways explicitly prohibited by the provider’s instructions for use

Likely NOT a substantial modification:

  • ▸ Security patches and bug fixes that do not affect AI outputs
  • ▸ User interface or display changes that do not change system behaviour
  • ▸ Configuration within the provider’s pre-authorised parameter ranges
  • ▸ Translation of instructions for use
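The Art 3(23) definition above boils down to a two-part test: the change was not foreseen or planned in the provider's original risk assessment, AND it either has the potential to affect Chapter III Section 2 compliance or changes the intended purpose. A minimal sketch of that logic (the boolean inputs are illustrative simplifications, not legal terms of art):

```python
def is_substantial_modification(foreseen_in_original_risk_assessment: bool,
                                may_affect_section_2_compliance: bool,
                                changes_intended_purpose: bool) -> bool:
    """Sketch of Art 3(23): a post-market change is 'substantial' if it was
    not foreseen or planned in the provider's original risk assessment AND
    it either has the potential to affect Chapter III Section 2 compliance
    or modifies the intended purpose."""
    if foreseen_in_original_risk_assessment:
        return False  # changes within the provider's planned envelope are not substantial
    return may_affect_section_2_compliance or changes_intended_purpose

# Retraining on new data, not foreseen by the provider, likely qualifies:
print(is_substantial_modification(False, True, False))  # True
# A security patch foreseen as routine maintenance does not:
print(is_substantial_modification(True, False, False))  # False
```

In practice the hard question is the first input — whether the change sits inside the provider's original risk assessment — which is why the provider's instructions for use and pre-authorised parameter ranges matter so much.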

C

Purpose upgrade — making a non-high-risk system high-risk

Art 25(1)(c)

If you modify the intended purpose of an AI system — including a general-purpose AI model — that is already on the market, such that it becomes high-risk under Art 6, you become the provider of the newly high-risk system.

Common scenario — GPAI deployed for Annex III use:

A general-purpose language model is licensed for general business tasks (not high-risk). Your company integrates it — via API or fine-tuning — into a candidate screening product that ranks applicants for employment. You have changed the intended purpose to employment/HR screening (Annex III, category 4). The original GPAI provider is no longer the provider of this specific deployment. You are.

The same logic applies to GPAI models used for credit scoring, healthcare diagnostics, educational assessment, access to public services, or biometric identification — all Annex III categories that would be high-risk under Art 6.
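Putting the three triggers together, Art 25(1) works like a logical OR: any single trigger is enough, and contract terms cannot switch it off. A simplified sketch of the test (the field names are illustrative, not statutory language):

```python
from dataclasses import dataclass

@dataclass
class ThirdPartyActor:
    """A distributor, importer, deployer, or other third party handling
    an AI system already placed on the market or put into service."""
    rebrands_high_risk_system: bool         # Art 25(1)(a): own name or trademark
    substantially_modifies_high_risk: bool  # Art 25(1)(b): remains high-risk after
    changes_purpose_to_high_risk: bool      # Art 25(1)(c): incl. GPAI models

def becomes_provider(actor: ThirdPartyActor) -> bool:
    """Any one trigger transfers full provider status under Art 25(1)."""
    return (actor.rebrands_high_risk_system
            or actor.substantially_modifies_high_risk
            or actor.changes_purpose_to_high_risk)

# White-labelling alone is enough to transfer provider status:
print(becomes_provider(ThirdPartyActor(True, False, False)))  # True
```

The point of modelling it this way: there is no weighing or balancing across the triggers, and no contractual override — each condition is independently sufficient.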

Product manufacturers — the automatic provider rule (Art 25(3))

Article 25(3) creates a special rule for manufacturers of products covered by the “legacy product safety” regulations listed in Section A of Annex I. These manufacturers are automatically deemed to be the provider of the embedded high-risk AI system — even without triggering Art 25(1) — in two circumstances.

Section A Annex I product manufacturers automatically become the AI provider if:

a

The high-risk AI system is placed on the market together with the manufacturer’s product under the manufacturer’s name or trademark — for example, a factory machine sold with an integrated AI inspection system under the machine manufacturer’s brand.

b

The high-risk AI system is put into service under the manufacturer’s name or trademark after the product has been placed on the market — for example, an AI software update pushed to an existing device fleet under the manufacturer’s label.
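The Art 25(3) rule is a conjunction of product scope with either of the two circumstances above. A sketch under that reading (parameter names are illustrative):

```python
def manufacturer_is_ai_provider(product_covered_by_annex_i_section_a: bool,
                                ai_placed_with_product_under_own_brand: bool,
                                ai_put_into_service_later_under_own_brand: bool) -> bool:
    """Sketch of the automatic Art 25(3) rule for product manufacturers:
    it only bites for products governed by Annex I Section A legislation,
    and then either circumstance (a) or (b) suffices."""
    if not product_covered_by_annex_i_section_a:
        return False
    return (ai_placed_with_product_under_own_brand
            or ai_put_into_service_later_under_own_brand)

# A machine sold with an integrated, own-brand AI inspection system:
print(manufacturer_is_ai_provider(True, True, False))  # True
```

Note that this rule operates automatically — unlike Art 25(1), the manufacturer need not have rebranded or modified the AI system itself.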

Which products are covered by Annex I Section A?

Section A of Annex I lists the New Legislative Framework product safety regulations and directives. If your product is governed by any of these, Art 25(3) applies to embedded high-risk AI:

  • ▸ Machinery Directive 2006/42/EC (being replaced by Machinery Regulation (EU) 2023/1230)
  • ▸ Toy Safety Directive 2009/48/EC
  • ▸ Recreational Craft & Personal Watercraft Directive 2013/53/EU
  • ▸ Lifts Directive 2014/33/EU
  • ▸ Equipment for Explosive Atmospheres (ATEX) Directive 2014/34/EU
  • ▸ Radio Equipment Directive 2014/53/EU
  • ▸ Pressure Equipment Directive 2014/68/EU
  • ▸ Cableway Installations Regulation (EU) 2016/424
  • ▸ Personal Protective Equipment Regulation (EU) 2016/425
  • ▸ Gas Appliances Regulation (EU) 2016/426
  • ▸ Medical Devices Regulation (EU) 2017/745
  • ▸ In Vitro Diagnostic Devices Regulation (EU) 2017/746

Important: Section B products are outside Art 25(3)

Annex I also contains a Section B listing other Union harmonisation legislation — civil aviation, agricultural and road vehicles, marine equipment, and rail interoperability. Under Art 2(2), high-risk AI systems related to Section B products are subject only to a limited set of AI Act provisions, and the automatic manufacturer-as-provider rule does NOT extend to them. Only the Section A product categories listed above are in scope for Art 25(3).

What happens to the original provider? — Article 25(2)

Once any of the three Art 25(1) triggers fires — or once Art 25(3) applies to a product manufacturer — the original provider ceases to be the provider of that specific AI system for AI Act purposes.

The original provider must still cooperate

Article 25(2) requires the original provider to cooperate with the new provider and provide all information and technical access necessary for the new provider to fulfil their Art 16 obligations. This includes:

  • ▸ Access to original technical documentation
  • ▸ Training data descriptions and data governance information
  • ▸ The original risk management file and conformity assessment documentation
  • ▸ Details of the original conformity assessment procedure and any notified body certificate
  • ▸ Post-market monitoring data collected to date

Exception — “not to be changed into high-risk” specification

Where the original provider has explicitly specified in their instructions for use, technical documentation, or contractual arrangements that the AI system is not intended to be changed into a high-risk system, the original provider is not obligated to cooperate with the new provider once Art 25(1) is triggered. This provides a pathway for original providers to protect themselves from liability for modifications made by downstream parties — but requires the specification to be clear and explicit in advance.

IP and trade secrets — Article 25(5)

Article 25(2) and (3) cooperation obligations are expressly without prejudice to intellectual property rights, confidential business information, and trade secrets. Original providers can decline to share materials that would genuinely compromise IP or trade secrets — but they cannot use this exception to avoid all cooperation.

The written agreement requirement — Article 25(4)

Article 25(4) requires providers and third-party AI component suppliers to establish a written agreement that covers all information, capabilities, and technical access the provider needs to comply with the AI Act. This applies throughout the AI supply chain — not just when provider status transfers.

What the written agreement must cover (minimum)

  • ▸ Access to technical capabilities and performance data needed to meet Chapter III Section 2 requirements
  • ▸ Information about training data, test results, risk assessments, and limitations relevant to the AI Act obligations
  • ▸ Cooperation in post-market monitoring (Art 72) and incident reporting obligations
  • ▸ Technical access needed by conformity assessment bodies and market surveillance authorities

AI Office model contract terms

The EU AI Office may develop voluntary model contract terms for use by parties establishing these agreements. These are not yet published but are expected to provide standard clauses for technical access, documentation sharing, and cooperation in regulatory oversight.

Open-source exception

Article 25(4) does not apply to third parties that make tools, services, processes, or components — other than general-purpose AI models — accessible to the public under a free and open-source licence. Pure open-source components are excluded from the written agreement requirement; GPAI models are not covered by this carve-out.

The full obligation chain you inherit — Article 16

When you become the provider under Art 25, all of the following Article 16 obligations apply to you — the same obligations as the original developer. This is not a reduced set of “downstream” obligations; it is the full provider requirement list.

Article 16 obligations now applying to you:

Art 9

Risk management system

Establish, implement, document, and maintain a continuous risk management system for the life of the AI system.

Art 10

Data governance

Implement data governance and management practices covering training, validation, and testing data quality.

Art 11

Technical documentation

Draw up technical documentation in accordance with Annex IV before market placement.

Art 12

Record-keeping / logging

Ensure the AI system has automatic logging capability to maintain records of operation.

Art 13

Transparency

Ensure the system is transparent and provides sufficient information for deployers to interpret outputs and use it appropriately.

Art 14

Human oversight

Design and develop the system to enable effective human oversight, including ability to stop, override, or disregard outputs.

Art 15

Accuracy, robustness, cybersecurity

Achieve and maintain appropriate levels of accuracy, robustness, and cybersecurity throughout the lifecycle.

Art 17

Quality management system

Put in place a quality management system covering all Art 17(1) aspects: conformity procedures, technical documentation, post-market monitoring, corrective actions.

Art 43

Conformity assessment

Complete the relevant conformity assessment procedure (self-assessment or notified body) before market placement.

Art 47

EU declaration of conformity

Draw up and sign an EU declaration of conformity for each high-risk AI system.

Art 48

CE marking

Affix the CE marking in accordance with Article 48 and Annex V before market placement.

Art 49

EU database registration

Register the AI system in the EU AI database before placing on the market or putting into service.

Art 72

Post-market monitoring

Establish and maintain a post-market monitoring system — plan, collect and analyse data, and take corrective action.

Overwhelmed by the Art 16 obligation chain?

Regumatrix generates a prioritised, timeline-based compliance plan for your specific situation — telling you exactly which obligations apply, which require third-party certification, and what deadlines you are working to. 3 free analyses to get started.

Generate my compliance plan

No changes are proposed under COM(2025) 836 or COM(2025) 837 for Article 25 (responsibilities along the AI value chain). The obligations described on this page reflect the enacted text and apply in full from 2 August 2026.

Frequently asked questions

What does Article 25 of the EU AI Act do?

Article 25 of the EU AI Act defines when a person or company that is not the original developer becomes the 'provider' of a high-risk AI system — and therefore inherits all provider obligations under Article 16. It creates three triggers: (a) placing your own name or trademark on a high-risk AI system already on the market; (b) making a substantial modification to a high-risk AI system; and (c) modifying the intended purpose of an AI system so that it becomes high-risk. These provisions exist to prevent responsibility gaps in AI supply chains where an AI system passes through many hands before reaching its end user.

When does a product manufacturer automatically become the AI provider under Article 25(3)?

Article 25(3) applies specifically to manufacturers of products covered by Section A of Annex I — the New Legislative Framework product safety laws such as the Machinery Directive, Medical Devices Regulation, Radio Equipment Directive, and similar. A product manufacturer governed by these Annex I Section A laws is automatically deemed the AI provider if: (a) the high-risk AI system is placed on the market or put into service together with the manufacturer's product and under the manufacturer's name or trademark, or (b) the high-risk AI system is put into service under the manufacturer's name or trademark after the product has been placed on the market. This rule does NOT apply to products covered by Annex I Section B (aviation, vehicles, marine equipment, rail), which are subject only to limited AI Act provisions under Article 2(2). If you manufacture machinery, medical devices, lifts, or radio equipment that contains a built-in high-risk AI system, Article 25(3) likely makes you the provider.

What does 'substantial modification' mean under Article 25?

The EU AI Act does not provide a precise numeric or technical definition of 'substantial modification' in Article 25 itself, but Article 3(23) defines it as 'a modification of an AI system after its being placed on the market or put into service which is not foreseen or planned by the provider in the original risk assessment, and which has the potential to affect the AI system's compliance with the requirements in Chapter III Section 2, or which results in a modification to the intended purpose for which the AI system has been assessed.' In practice, this includes: retraining the AI model on new data; changing the output logic, thresholds, or decision architecture; integrating the AI with additional components that the original provider did not anticipate; or altering the system in ways the provider's instructions for use explicitly exclude. Minor bug fixes, user interface changes, or security patches that do not affect the AI logic itself are generally not substantial modifications.

What obligations apply when you become the provider under Article 25?

When Article 25 makes you the provider, all obligations in Article 16 apply to you — the same obligations that apply to the original developer. These include: establishing and maintaining a quality management system (Article 17); preparing technical documentation (Article 11, Annex IV); implementing a post-market monitoring system (Article 72); registering the AI system in the EU AI database (Article 49); applying for conformity assessment by a notified body if required (Article 43); affixing CE marking under Article 48; issuing the EU declaration of conformity (Article 47); and implementing corrective actions and reporting to authorities. This is the full high-risk AI provider obligation chain. Simply rebranding or modifying an AI system does not exempt you from any of these.

Does Article 25 affect companies that use GPAI models via API?

Potentially yes — Article 25(1)(c) makes you the provider if you modify the intended purpose of an AI system (including a general-purpose AI system) such that it becomes high-risk under Article 6. If you integrate a GPAI model via API and use it specifically for a purpose that falls within Annex III (e.g., HR screening, credit scoring, biometric identification, or education/vocational assessment), you may have changed the intended purpose of that AI system into a high-risk use case. However, merely using a GPAI model via API without changing its intended purpose — and remaining within the scope of the provider's instructions for use — generally does not trigger Article 25. The key question is whether your specific deployment changes the intended purpose into a high-risk one.

What must the original provider do when they are replaced under Article 25?

Under Article 25(2), once Article 25(1) is triggered, the original provider ceases to be the provider of the affected AI system for AI Act purposes. However, the original provider is still required to cooperate with the new provider — making available all the information, documentation, and technical access the new provider needs to satisfy their Article 16 obligations. This includes access to technical documentation, training data descriptions, the risk management file, and the original conformity assessment documentation. There is an exception: the original provider can specify in advance that their AI system is not intended to be changed into a high-risk system, in which case they are not obligated to cooperate once the change happens. Article 25(5) clarifies that cooperation obligations are without prejudice to intellectual property rights and trade secrets.

Related compliance guides

AI Provider Obligations (Art 16)

The full checklist of 13+ obligations that now apply to you as a provider under Art 25.

Importer & Distributor Obligations (Arts 23–24)

If Art 25 does NOT make you a provider, understand your distributor or importer obligations.

Is My AI System High-Risk?

Article 6 and Annex III classification: determine whether Art 25 is even in play for your system.

Conformity Assessment (Art 43)

What the conformity assessment procedure requires — and when a notified body is mandatory.

GPAI Model Compliance (Arts 51–56)

GPAI providers face separate obligations — understand the boundary between GPAI and high-risk use.

Substantial Modification Explained

What counts as a substantial modification and when it triggers provider status under Art 25.