Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026

EU AI Act and Education AI

Four types of education AI are classified as high-risk under Annex III point 3. From August 2026, providers who build them and institutions that deploy them face a full set of compliance obligations — or fines of up to €15,000,000 or 3% of worldwide annual turnover, whichever is higher, under Article 99(4).

Annex III — Point 3 · High-Risk · €15,000,000 or 3% penalty (Art 99(4)) · Art 50 chatbot rules apply from 2 August 2026

High-risk classification applies from 2 August 2026 — 4 months away

Any AI system that (a) controls admission or assignment to educational institutions, (b) evaluates learning outcomes, (c) assesses the education level an individual will receive, or (d) monitors and detects prohibited student behaviour in tests automatically triggers full high-risk obligations under Articles 9–21 (provider) and Article 26 (deployer). The Art 50 chatbot disclosure rule applies from the same 2 August 2026 date.


1. The four high-risk education AI system types

Annex III point 3 of the EU AI Act lists four categories of education and vocational training AI systems that are classified as high-risk by law. If an AI system fits any of these descriptions, the classification is automatic — no further risk assessment is required for the category determination itself.

(a) Admission, access and assignment to institutions

Annex III

AI systems used for admission to, access to, or assignment to educational or vocational training institutions at all levels. This covers university application screening algorithms that rank or shortlist candidates, automated placement tests that determine whether an applicant meets entry requirements, and machine-learning systems that route applicants between courses or programmes.

(b) Evaluating learning outcomes

Annex III

AI systems for evaluating learning outcomes of persons, including systems used to steer the learning process of a student. Automated grading systems, AI that scores written essays and feeds those scores into pass/fail decisions, and adaptive learning platforms that adjust an individual's study path or content difficulty based on performance data all fall within this category.

(c) Assessing the appropriate level of education

Annex III

AI systems for assessing the appropriate level of education that an individual will receive or to which they will have access. This includes AI that determines which educational track a student should enter — academic versus vocational, standard versus advanced — or that decides on specialised support needs such as remedial programmes or accelerated pathways.

(d) Monitoring prohibited student behaviour during tests

Annex III

AI systems for monitoring and detecting prohibited behaviour of students during tests. AI-powered exam proctoring systems that watch for gaze deviation, unusual keystrokes, or background noise, and flag candidates for suspected cheating, are the primary use case. These systems are high-risk because a flag can directly lead to disciplinary action or grade invalidation, making accuracy and human oversight critical safeguards.
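Because the category determination is automatic, it behaves like a lookup from a system's declared functions to Annex III labels. The sketch below illustrates that mechanical mapping; the function names (`admission_screening`, `automated_grading`, and so on) are invented shorthand for this example, not the Regulation's wording.

```python
from enum import Enum

class AnnexIIIPoint3(Enum):
    """The four high-risk education AI categories under Annex III point 3."""
    ADMISSION_ACCESS_ASSIGNMENT = "3(a)"   # admission, access, or assignment
    LEARNING_OUTCOME_EVALUATION = "3(b)"   # evaluating / steering learning outcomes
    EDUCATION_LEVEL_ASSESSMENT = "3(c)"    # assessing the appropriate education level
    TEST_MISCONDUCT_MONITORING = "3(d)"    # monitoring prohibited behaviour in tests

def classify(system_functions: set[str]) -> list[AnnexIIIPoint3]:
    """Map a system's declared functions onto Annex III point 3 categories.
    Illustrative labels only — a real assessment reads the statutory text."""
    mapping = {
        "admission_screening": AnnexIIIPoint3.ADMISSION_ACCESS_ASSIGNMENT,
        "automated_grading": AnnexIIIPoint3.LEARNING_OUTCOME_EVALUATION,
        "track_placement": AnnexIIIPoint3.EDUCATION_LEVEL_ASSESSMENT,
        "exam_proctoring": AnnexIIIPoint3.TEST_MISCONDUCT_MONITORING,
    }
    return [cat for fn, cat in mapping.items() if fn in system_functions]

# An adaptive platform that both grades work and routes students hits two categories:
hits = classify({"automated_grading", "track_placement"})
```

One system can match several categories at once, and each match independently triggers the full high-risk obligations.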

2. The Article 6(3) derogation: when education AI can escape high-risk

Article 6(3) allows a provider to declare that a system listed in Annex III is not high-risk because it does not pose a significant risk of harm, but only if at least one of the following four conditions is met. The provider must document the assessment in writing before placing the system on the market (Article 6(4)). Failure to produce that documentation on request is itself a compliance violation.

Narrow procedural task only

Art 6(3)(a)

The AI system is intended to perform a narrow procedural task. It assists a human in one well-defined sub-step of a workflow — for example, converting handwritten exam answers to typed text — and does not itself assess quality or make any judgement about the student.

Improves the result of a previously completed human activity

Art 6(3)(b)

The AI system is intended to improve the result of a previously completed human activity. It works on material that a human evaluator has already produced — for example, automatically formatting a teacher's written grade sheet after the teacher has graded — rather than producing a new assessment itself.

Detects decision patterns without replacing human assessment

Art 6(3)(c)

The AI system is not intended to replace or influence a human assessment. It only analyses data to detect patterns — for example, flagging statistical outliers in a cohort's marks for a teacher to review — but makes no recommendation about any individual student.

Preparatory task only — not a sole basis for decision

Art 6(3)(d)

The AI system is intended to prepare an assessment to be carried out by a human and is not a basis for the decision itself. It aggregates raw input data for a teacher or admissions officer to review but never produces a score, grade, or outcome on which anyone relies.

Critical exception: profiling always triggers high-risk

Article 6(3) expressly removes the derogation for any system that involves the profiling of natural persons within the meaning of Article 4(4) of the GDPR. If your system builds individual learner profiles — collecting data on study habits, engagement patterns, aptitude scores, or behaviour to predict future performance, determine suitability, or personalise content at the individual level — the derogation cannot apply, even where one of the four conditions above would otherwise be satisfied.
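The derogation logic reduces to two steps: the profiling carve-out overrides everything, and otherwise any one of the conditions in points (a) to (d) suffices. A minimal sketch, with field names that are paraphrases rather than statutory language:

```python
from dataclasses import dataclass

@dataclass
class DerogationAssessment:
    """Hypothetical record of a provider's documented Art 6(3) self-assessment."""
    narrow_procedural_task: bool         # Art 6(3)(a)
    improves_prior_human_activity: bool  # Art 6(3)(b)
    pattern_detection_only: bool         # Art 6(3)(c)
    preparatory_task_only: bool          # Art 6(3)(d)
    involves_profiling: bool             # profiling within GDPR Art 4(4)

def derogation_applies(a: DerogationAssessment) -> bool:
    """True if the provider may document the system as not high-risk."""
    if a.involves_profiling:
        return False  # profiling defeats the derogation outright
    return any([
        a.narrow_procedural_task,
        a.improves_prior_human_activity,
        a.pattern_detection_only,
        a.preparatory_task_only,
    ])

# A handwriting-to-text converter: narrow procedural task, no profiling.
ocr_tool = DerogationAssessment(True, False, False, False, involves_profiling=False)
# An adaptive platform that profiles learners: the carve-out applies.
profiler = DerogationAssessment(False, False, True, False, involves_profiling=True)
```

The real assessment is of course a reasoned legal document, not a boolean form — but the structure of the test is exactly this: carve-out first, then any-of.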

3. If you build education AI: provider obligations

EdTech companies and other providers who place high-risk education AI systems on the EU market or put them into service must satisfy Articles 9–21 before deployment. The obligations below are the primary requirements that directly apply to education use cases.

Risk management system — continuous and lifecycle-spanning

Art 9

Establish, document, and maintain a risk management system that runs throughout the entire lifecycle of the system — from design to post-market monitoring. The system must identify and analyse known and reasonably foreseeable risks to health, safety, or fundamental rights; estimate risks from intended use and foreseeable misuse; and adopt targeted measures to address them. Article 9(9) specifically requires providers to consider whether their system is likely to have an adverse impact on persons under the age of 18, which is directly relevant to school-age learners.

Data governance — representative, bias-audited datasets

Art 10

Training, validation, and testing datasets must be relevant, sufficiently representative, and as free of errors as possible. Providers must examine datasets for biases that could lead to discrimination under Union law — critical in education because bias in training data (for example, historical admissions patterns) can entrench disadvantage. Article 10(5) permits processing of special categories of personal data for bias detection under strict conditions: no suitable alternative data exists; pseudonymisation is applied; data is not shared with third parties; data is deleted once bias correction is complete.

Technical documentation — before market, kept up to date

Art 11

Draw up technical documentation as set out in Annex IV before placing the system on the market or putting it into service. The documentation must demonstrate compliance with all Section 2 requirements and make the information accessible to national authorities and notified bodies. SMEs and start-ups may use a simplified technical documentation form that the Commission is required to establish.

Automatic logging built into the system

Art 12

The high-risk AI system must technically allow automatic recording of events throughout its lifetime. Logging must capture situations that could indicate a risk, support post-market monitoring, and enable deployers to monitor operation. Providers must retain logs under their control for at least six months (Article 19). For deployers such as universities, the deployer-side log retention obligation runs independently under Article 26(6).

Transparency — instructions for use for deployer institutions

Art 13

Design the system to be sufficiently transparent for deployer institutions to interpret its outputs correctly and use them appropriately. Supply instructions for use in a suitable digital format covering: the intended purpose; accuracy metrics and known limitations; circumstances that could lead to risks to fundamental rights; human oversight requirements; log collection mechanisms; and any pre-determined changes to the system's performance. Institutions need this information to implement Art 26 and meet the FRIA requirements under Art 27.

Human oversight — built-in design requirements

Art 14

Design and develop the system with appropriate human-machine interface tools so that responsible persons at the deploying institution can effectively oversee its operation. The system must enable oversight persons to: understand the system's capabilities and limitations; detect and address anomalies; avoid automation bias when reviewing AI-generated grades or flags; correctly interpret outputs; and, in any situation, override or reverse the system's output, or halt it entirely.

Accuracy, robustness and cybersecurity

Art 15

The system must achieve and maintain an appropriate level of accuracy, robustness, and cybersecurity throughout its lifecycle. Declare the accuracy metrics in the instructions for use. For systems that continue to learn after deployment (for example, adaptive learning platforms), design the system to prevent feedback loops where a biased output influences the next round of training data. Protect against adversarial attacks including data poisoning of the training set and input manipulation designed to cause erroneous outputs.

Quality management, CE marking and conformity assessment

Art 16 / 43

Maintain a documented quality management system (Article 17). Before placing the system on the market, complete the relevant conformity assessment procedure under Article 43 — for most education AI systems not embedded in Annex I regulated products, self-declaration under Annex VI (internal control) is available. Affix the CE marking. Register the system in the EU database under Article 49(1). Keep all documentation for 10 years (Article 18).

4. If your institution uses education AI: deployer obligations

Schools, universities, training providers, and examination bodies that use high-risk education AI systems built by a third-party provider are deployers under Article 3(4). Article 26 and Article 27 set their obligations.

Follow the provider's instructions for use

Art 26(1)

Take appropriate technical and organisational measures to ensure the system is used in line with the instructions provided by the EdTech company. Using a high-risk AI grading system outside its documented intended purpose — for example, applying it to a subject type not covered in the instructions — creates compliance exposure for the institution.

Assign competent human oversight persons

Art 26(2)

Designate natural persons — staff with the necessary competence, training, and authority — to perform human oversight. These persons must be able to understand the system's outputs, detect when the system is malfunctioning or producing biased results, and take corrective action. They must also be empowered to override or disregard AI outputs in individual cases.

Ensure input data is relevant and representative

Art 26(4)

Where the institution controls the input data fed to the high-risk AI system — for example, uploading student records to an admission screening tool — that data must be relevant and sufficiently representative of the intended population. Using a system trained on one demographic with data from a very different student population is a deployer-side compliance risk.

Monitor operation, suspend if risk arises, retain logs ≥ 6 months

Art 26(5–6)

Monitor the system's operation on the basis of the instructions for use. If the system appears to present a risk within the meaning of Article 79(1), suspend its use and immediately notify the provider and the relevant market surveillance authority. Retain all automatically generated logs under the institution's control for at least six months — or longer if required by applicable data protection or national education law.
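The six-month floor is a minimum, not a deletion schedule. A deployer's retention check might look like the following sketch, which models only the Art 26(6) floor (the 183-day figure is our approximation of "at least six months") and deliberately ignores longer periods that data protection or national education law may impose:

```python
from datetime import date, timedelta

# "At least six months" — approximated here as 183 days for illustration.
MIN_RETENTION = timedelta(days=183)

def may_delete(log_created: date, today: date) -> bool:
    """Deployer-side check of the Art 26(6) floor only: a log may be purged
    once the six-month minimum has elapsed. Longer statutory retention
    requirements are out of scope for this sketch."""
    return today - log_created >= MIN_RETENTION

# A log created on the application date cannot be purged four months later:
too_early = may_delete(date(2026, 8, 2), date(2026, 12, 1))
```

Institutions should treat this as a lower bound and layer their own records-management rules on top.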

Inform students when high-risk AI makes or assists decisions about them

Art 26(11)

Deployers of Annex III high-risk systems that make decisions or assist in making decisions relating to natural persons must inform those persons. In practice: when an AI system is used to grade work, determine a student's course track, or flag suspected cheating in a test, the student must be told that AI is involved. This obligation applies without prejudice to the separate Art 50 disclosure rules for AI systems that interact directly with students.

Article 27 — Fundamental Rights Impact Assessment (FRIA)

Article 27(1) makes the FRIA mandatory for bodies governed by public law — including public universities, state schools, and public examination bodies — before first deploying a high-risk AI system referred to in Article 6(2). Private institutions that deliver a public service of general economic interest are equally covered. The FRIA must document:

  • The processes in which the high-risk AI system will be used
  • The period of time and frequency of use
  • Categories of natural persons likely to be affected — students, applicants
  • Specific fundamental rights risks to those persons
  • How human oversight measures will be implemented
  • Measures to respond if those risks materialise

Once completed, the FRIA must be reported to the relevant market surveillance authority (Article 27(3)). It may be combined with a GDPR Article 35 Data Protection Impact Assessment where applicable.
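Because the six elements are enumerated exhaustively, a FRIA draft lends itself to a structured record with a completeness check. The sketch below is illustrative: the field names are our paraphrases of Article 27(1), not the Regulation's wording, and a real FRIA is a reasoned assessment, not a filled-in form.

```python
from dataclasses import dataclass, fields

@dataclass
class FRIARecord:
    """Illustrative structure mirroring the six Article 27(1) elements."""
    deployment_processes: str         # processes in which the system will be used
    period_and_frequency: str         # period of time and frequency of use
    affected_person_categories: list  # e.g. students, applicants
    fundamental_rights_risks: list    # specific risks to those persons
    human_oversight_measures: str     # how oversight will be implemented
    risk_response_measures: str       # measures if risks materialise

def is_complete(fria: FRIARecord) -> bool:
    """Trivial completeness check: every element must be filled in."""
    return all(bool(getattr(fria, f.name)) for f in fields(fria))

draft = FRIARecord(
    deployment_processes="AI-assisted grading of first-year essays",
    period_and_frequency="each examination period, twice per academic year",
    affected_person_categories=["enrolled students"],
    fundamental_rights_risks=[],  # not yet assessed — draft is incomplete
    human_oversight_measures="examiner reviews every AI-proposed grade",
    risk_response_measures="manual regrade and appeal channel",
)
```

A check like this catches the most common gap — a risk element left empty — before the assessment is notified to the market surveillance authority.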


5. Classroom chatbots and AI tutors: Article 50 transparency rules

AI tutoring assistants, homework-help chatbots, and conversational study tools are typically not high-risk under Annex III — they do not evaluate outcomes, determine access, or monitor exams. However, because they interact directly with students, they fall under the Article 50 limited-risk transparency obligations, which apply from 2 August 2026.

Disclose AI identity at first interaction

Art 50(1)

Providers must ensure that AI systems designed to interact directly with natural persons — including students — are built to inform users that they are interacting with an AI, unless this is obvious from the context. The disclosure must be made at the latest at the time of the first interaction and must conform to accessibility requirements. An EdTech company deploying a chatbot tutor must build this disclosure into the product. The disclosure cannot be buried in terms and conditions.
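In product terms, "at the latest at the time of the first interaction" means the disclosure must be emitted before or with the first response, not buried in settings. A minimal sketch of that behaviour — the class, method names, and disclosure text are all invented for illustration:

```python
class TutorChatbot:
    """Sketch of an Art 50(1)-style disclosure wired into a chatbot session:
    the user is told they are talking to an AI no later than the first reply."""

    DISCLOSURE = "You are chatting with an AI tutor, not a human teacher."

    def __init__(self) -> None:
        self._disclosed = False

    def reply(self, user_message: str) -> str:
        answer = self._generate(user_message)
        if not self._disclosed:
            self._disclosed = True
            return f"{self.DISCLOSURE}\n{answer}"
        return answer

    def _generate(self, msg: str) -> str:
        # Placeholder for the real model call.
        return f"(model answer to: {msg})"

bot = TutorChatbot()
first = bot.reply("Help with fractions")  # includes the disclosure
second = bot.reply("And decimals?")       # disclosure not repeated
```

The disclosure must also meet accessibility requirements, which this sketch does not model — in practice that means the UI layer, not just the text stream.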

Mark AI-generated content in a machine-readable format

Art 50(2)

Providers of AI systems that generate synthetic text, audio, image, or video content must ensure the output is marked in a machine-readable format as artificially generated or manipulated. This applies to AI writing tools, essay feedback generators, and content creation systems used in education. The technical solution must be effective, interoperable, and robust to the extent technically feasible.
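Article 50(2) requires an effective, machine-readable marking but does not prescribe a schema; industry provenance standards such as C2PA are one candidate. The envelope format below is purely illustrative — a minimal way to attach a parseable "artificially generated" flag plus an integrity hash to synthetic text:

```python
import hashlib
import json

def mark_synthetic(text: str, provider: str, model: str) -> dict:
    """Wrap AI-generated text with an illustrative machine-readable
    provenance record. The schema here is an assumption for this sketch,
    not a format mandated by the Act."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "provider": provider,
            "model": model,
            # Hash lets a consumer detect tampering with the marked content.
            "content_sha256": hashlib.sha256(text.encode()).hexdigest(),
        },
    }

record = mark_synthetic("Sample essay feedback.", "ExampleEdTech", "tutor-v1")
payload = json.dumps(record)  # machine-readable: any consumer can parse the flag
```

Whatever format a provider chooses, the legal test is that the marking is effective, interoperable, and robust to the extent technically feasible.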

Deepfake disclosure — relevant for AI-generated educational media

Art 50(4)

Deployers using AI to generate or manipulate image, audio, or video constituting a deepfake must disclose that the content is artificially generated, at the time of first exposure. Where the content is evidently artistic or fictional — for example, an AI-generated historical re-enactment video used in a history class — only a signal that AI was used is required, and the display of the work need not be disrupted.

Violations of Article 50 carry the same fine ceiling as high-risk obligation breaches: up to €15,000,000 or 3% of global annual turnover under Article 99(4).
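The fine arithmetic is simple: the standard ceiling is the higher of the fixed amount and the percentage, while Article 99(6) flips that to the lower of the two for SMEs. A worked sketch:

```python
def art_99_4_ceiling(worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum administrative fine under Article 99(4).
    Standard: the HIGHER of EUR 15m or 3% of total worldwide annual turnover.
    SMEs and start-ups (Art 99(6)): capped at the LOWER of the two amounts."""
    fixed = 15_000_000
    pct = 0.03 * worldwide_turnover_eur
    return min(fixed, pct) if is_sme else max(fixed, pct)

# A provider with EUR 2bn turnover: 3% = EUR 60m, which exceeds the fixed amount.
large = art_99_4_ceiling(2_000_000_000)
# An SME with EUR 10m turnover: 3% = EUR 300k, and the lower figure applies.
small = art_99_4_ceiling(10_000_000, is_sme=True)
```

These are ceilings, not fixed penalties — authorities calibrate the actual amount to the gravity and duration of the infringement.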

Grey-area signals: when to look more carefully

These situations suggest the high-risk track applies and your system — or your use of it — requires closer scrutiny:

  • An adaptive learning platform adjusts the difficulty level or topic sequence for individual students — if those adjustments determine the student's course pathway or final level, it evaluates learning outcomes and is high-risk
  • Your university uses an AI tool to screen application essays and assign a score before human review — if that score influences the admission decision, the derogation under Art 6(3) does not apply
  • A language learning app tracks progress and recommends the next certification level — if it formally assesses the appropriate level of education the user will receive, it falls within Annex III point 3(c)
  • An AI proctoring tool flags students for exam misconduct using webcam analysis — this is plainly within Annex III point 3(d) and requires full provider certification and deployer oversight
  • You are a public university that has deployed any Annex III high-risk system without first completing the Article 27 FRIA — the missing FRIA is itself a compliance gap reportable to supervisory authorities

No changes to Annex III point 3 or to the education AI obligations are proposed under COM(2025) 836 or COM(2025) 837. The four high-risk system categories, the Article 6(3) derogation test, and the Article 26 and 27 deployer obligations remain unchanged in both Digital Omnibus proposals.

Frequently asked questions

Does every AI tool used in a school trigger the Annex III high-risk classification?

No. Only the four specific types listed in Annex III point 3 are high-risk: (a) AI used for admission to, access to, or assignment to an educational institution; (b) AI that evaluates learning outcomes, including adaptive systems that steer the learning process; (c) AI that assesses the appropriate level of education an individual will receive; and (d) AI that monitors and detects prohibited student behaviour during tests. Standard spell-checkers, library search tools, timetabling software, and general office automation are not high-risk. AI tutoring chatbots are typically limited-risk under Article 50, not high-risk, unless they also formally evaluate or determine outcomes that affect a student's access or progression.

Can a university use the Article 6(3) derogation to avoid high-risk classification?

Universities are deployers, not providers — the derogation is assessed by the provider (the EdTech company that built the system) before it is placed on the market. If the provider documents a valid derogation under Article 6(3), the system is not high-risk and the institution can use it without Article 26 high-risk deployer obligations. However, if the system profiles students — creating records of their behaviour, aptitude, or learning patterns to make predictions — the derogation cannot apply, even where one of the four conditions would otherwise be satisfied. Article 6(3) expressly carves out profiling of natural persons from the derogation.

When must a school inform students that an AI system is making decisions about them?

Under Article 26(11), deployers of high-risk Annex III systems that make decisions, or assist in making decisions, relating to natural persons must inform those persons. For education, this means that when a high-risk AI system grades work, places a student in a course track, or flags behaviour during an exam, the student must be told. Separately, Article 50(1) requires that any AI system intended to interact directly with students — such as a chatbot tutor — disclose it is an AI, clearly and at the time of first interaction. That Art 50 obligation applies from 2 August 2026, alongside the high-risk rules.

Does a public university need to do a Fundamental Rights Impact Assessment?

Yes. Article 27(1) makes the FRIA mandatory for bodies governed by public law before first deploying a high-risk AI system listed in Article 6(2). Public universities, state schools, and public examination boards are bodies governed by public law. The assessment must document six elements: processes in which the system will be used; time period and frequency; categories of affected persons (students, applicants); specific fundamental rights risks to those persons; how human oversight will be implemented; and measures to address those risks. The completed FRIA must be notified to the relevant market surveillance authority. Private institutions that provide a service of general economic interest — such as private universities receiving state funding — are also covered.

What fine applies if an EdTech company or school breaches EU AI Act education AI obligations?

Article 99(4) applies to violations of the high-risk AI system requirements in Articles 9 through 21 and Articles 26 and 50. The maximum fine is €15,000,000 or 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. For providers, failure to establish a risk management system (Article 9), maintain adequate training-data governance free of discriminatory bias (Article 10), or provide compliant instructions for use (Article 13) are each Art 99(4) violations. For deployer institutions, failure to assign oversight persons (Article 26(2)), to retain logs (Article 26(6)), or to notify students of AI-assisted decisions (Article 26(11)) carries the same fine ceiling. For SMEs, Article 99(6) caps the fine at whichever of the two amounts is lower.

Related compliance guides

EU AI Act Compliance Hub

Every obligation, deadline, and sector guide in one place

High-Risk AI System Checklist

Step-by-step checklist for Articles 9–21 compliance

AI Provider Obligations

Full guide to Arts 9–21, CE marking, and EU database registration

AI Deployer Obligations

Article 26 checklist and FRIA for institutions using AI tools

EU AI Act and HR & Recruitment

Annex III point 4 obligations for employment AI

EU AI Act Timeline 2025–2030

Every enforcement deadline from February 2025 to August 2030

Ready to map your education AI obligations?

Use Regumatrix to identify which Annex III categories apply to your system, generate a compliance roadmap, and track your progress to the August 2026 deadline.

Get started free