Four types of education AI are classified as high-risk under Annex III point 3. From 2 August 2026, providers that build them and institutions that deploy them face the full set of high-risk compliance obligations, with fines of up to €15,000,000 or 3% of worldwide annual turnover, whichever is higher, under Article 99(4).
High-risk classification applies from 2 August 2026 — 4 months away
Any AI system that (a) controls admission or assignment to educational institutions, (b) evaluates learning outcomes, (c) assesses the education level an individual will receive, or (d) monitors and detects prohibited student behaviour in tests automatically triggers full high-risk obligations under Articles 9–21 (provider) and Article 26 (deployer). The Article 50 chatbot disclosure rule applies from the same date.
Annex III point 3 of the EU AI Act lists four categories of education and vocational training AI systems that are classified as high-risk by law. If an AI system fits any of these descriptions, the classification is automatic — no further risk assessment is required for the category determination itself.
AI systems used for admission to, access to, or assignment to educational or vocational training institutions at all levels. This covers university application screening algorithms that rank or shortlist candidates, automated placement tests that determine whether an applicant meets entry requirements, and machine-learning systems that route applicants between courses or programmes.
AI systems for evaluating learning outcomes of persons, including systems used to steer the learning process of a student. Automated grading systems, AI that scores written essays and feeds those scores into pass/fail decisions, and adaptive learning platforms that adjust an individual's study path or content difficulty based on performance data all fall within this category.
AI systems for assessing the appropriate level of education that an individual will receive or to which they will have access. This includes AI that determines which educational track a student should enter — academic versus vocational, standard versus advanced — or that decides on specialised support needs such as remedial programmes or accelerated pathways.
AI systems for monitoring and detecting prohibited behaviour of students during tests. AI-powered exam proctoring systems that watch for gaze deviation, unusual keystrokes, or background noise, and flag candidates for suspected cheating, are the primary use case. These systems are high-risk because a flag can directly lead to disciplinary action or grade invalidation, making accuracy and human oversight critical safeguards.
Article 6(3) allows a provider to conclude that a system listed in Annex III is not high-risk, but only where the system does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making, and at least one of the following conditions is met. The provider must document the assessment in writing before placing the system on the market (Article 6(4)). Failure to produce that documentation on request is itself a compliance violation.
The AI system is intended to perform a narrow procedural task. It assists a human in one well-defined sub-step of a workflow — for example, converting handwritten exam answers to typed text — and does not itself assess quality or make any judgement about the student.
The AI system is intended to improve the result of a previously completed human activity. It works on material that a human evaluator has already produced — for example, automatically formatting a teacher's written grade sheet after the teacher has graded — rather than producing a new assessment itself.
The AI system is not intended to replace or influence a human assessment. It only analyses data to detect patterns — for example, flagging statistical outliers in a cohort's marks for a teacher to review — but makes no recommendation about any individual student.
The AI system is intended to prepare an assessment to be carried out by a human and is not a basis for the decision itself. It aggregates raw input data for a teacher or admissions officer to review but never produces a score, grade, or outcome on which anyone relies.
Critical exception: profiling always triggers high-risk
Article 6(3) expressly removes the derogation for any system that involves the profiling of natural persons within the meaning of Article 4(4) of the GDPR. If your system builds individual learner profiles — collecting data on study habits, engagement patterns, aptitude scores, or behaviour to predict future performance, determine suitability, or personalise content at the individual level — the derogation cannot apply, regardless of whether any of the four conditions above is met.
EdTech companies and other providers who place high-risk education AI systems on the EU market or put them into service must satisfy Articles 9–21 before deployment. The obligations below are the primary requirements that directly apply to education use cases.
Establish, document, and maintain a risk management system that runs throughout the entire lifecycle of the system — from design to post-market monitoring. The system must identify and analyse known and reasonably foreseeable risks to health, safety, or fundamental rights; estimate risks from intended use and foreseeable misuse; and adopt targeted measures to address them. Article 9(9) specifically requires providers to consider whether their system is likely to have an adverse impact on persons under the age of 18, which is directly relevant to school-age learners.
Training, validation, and testing datasets must be relevant, sufficiently representative, and as free of errors as possible. Providers must examine datasets for biases that could lead to discrimination under Union law — critical in education because bias in training data (for example, historical admissions patterns) can entrench disadvantage. Article 10(5) permits processing of special categories of personal data for bias detection under strict conditions: no suitable alternative data exists; pseudonymisation is applied; data is not shared with third parties; data is deleted once bias correction is complete.
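For providers auditing training data, a screening of the kind Article 10 contemplates might look like the sketch below. The column names and the disparity ratio are illustrative choices for this example, not requirements of the Act.

```python
# Minimal sketch of a dataset bias screen before training an admissions model.
# Column names ("admitted", "school_type") are hypothetical.
import pandas as pd

def selection_rate_disparity(df: pd.DataFrame, outcome: str, group: str) -> pd.Series:
    """Each group's positive-outcome rate divided by the best-performing group's rate.

    Values well below 1.0 flag a disparity the provider should investigate
    before the dataset is used for training.
    """
    rates = df.groupby(group)[outcome].mean()
    return rates / rates.max()

# Hypothetical historical admissions data, the kind of material an
# Annex III point 3(a) screening system would learn from.
historical = pd.DataFrame({
    "admitted":    [1, 0, 1, 1, 0, 0, 1, 0],
    "school_type": ["state", "state", "private", "private", "state", "state", "private", "state"],
})
print(selection_rate_disparity(historical, outcome="admitted", group="school_type"))
```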
Draw up technical documentation as set out in Annex IV before placing the system on the market or putting it into service. The documentation must demonstrate compliance with all Section 2 requirements and make the information accessible to national authorities and notified bodies. SMEs and start-ups may use a simplified technical documentation form that the Commission is required to establish.
The high-risk AI system must technically allow automatic recording of events throughout its lifetime. Logging must capture situations that could indicate a risk, support post-market monitoring, and enable deployers to monitor operation. Providers must retain logs under their control for at least six months (Article 19). For deployers such as universities, the deployer-side log retention obligation runs independently under Article 26(6).
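As a rough illustration of what lifecycle event recording can look like in practice, the sketch below appends each scoring event as a timestamped JSON line that can be retained for the minimum six-month period. The field names and file layout are assumptions for this example, not formats prescribed by Article 12.

```python
# Illustrative event logger for Article 12-style record-keeping.
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_event_log.jsonl")

def log_event(event_type: str, model_version: str, details: dict) -> str:
    """Append a structured, timestamped record of one system event."""
    event_id = str(uuid.uuid4())
    record = {
        "event_id": event_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,          # e.g. "essay_scored", "anomaly_detected"
        "model_version": model_version,
        "details": details,
    }
    with LOG_PATH.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return event_id

# Example: record a scoring event that the deployer could later audit.
log_event("essay_scored", "v2.3.1", {"submission_id": "S-1042", "score": 71, "flagged_for_review": False})
```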
Design the system to be sufficiently transparent for deployer institutions to interpret its outputs correctly and use them appropriately. Supply instructions for use in a suitable digital format covering: the intended purpose; accuracy metrics and known limitations; circumstances that could lead to risks to fundamental rights; human oversight requirements; log collection mechanisms; and any pre-determined changes to the system's performance. Institutions need this information to implement Art 26 and meet the FRIA requirements under Art 27.
Design and develop the system with appropriate human-machine interface tools so that responsible persons at the deploying institution can effectively oversee its operation. The system must enable oversight persons to: understand the system's capabilities and limitations; detect and address anomalies; avoid automation bias when reviewing AI-generated grades or flags; correctly interpret outputs; and, in any situation, override or reverse the system's output, or halt it entirely.
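A minimal sketch of that override pattern follows, assuming a hypothetical AI grading backend: the model output stays a proposal until a named reviewer confirms or changes it, and any divergence from the AI proposal is recorded.

```python
# Human-in-the-loop override sketch: the AI grade is only a proposal.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GradeDecision:
    submission_id: str
    ai_proposed_grade: float
    final_grade: Optional[float] = None
    reviewer: Optional[str] = None
    overridden: bool = False

def review(decision: GradeDecision, reviewer: str, confirmed_grade: float) -> GradeDecision:
    """The human reviewer sets the final grade; any departure from the AI proposal is flagged."""
    decision.reviewer = reviewer
    decision.final_grade = confirmed_grade
    decision.overridden = confirmed_grade != decision.ai_proposed_grade
    return decision

proposal = GradeDecision(submission_id="S-1042", ai_proposed_grade=58.0)
final = review(proposal, reviewer="dr.jones", confirmed_grade=64.0)  # human overrides the AI score
```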
The system must achieve and maintain an appropriate level of accuracy, robustness, and cybersecurity throughout its lifecycle. Declare the accuracy metrics in the instructions for use. For systems that continue to learn after deployment (for example, adaptive learning platforms), design the system to prevent feedback loops where a biased output influences the next round of training data. Protect against adversarial attacks including data poisoning of the training set and input manipulation designed to cause erroneous outputs.
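One simple safeguard against such feedback loops is to exclude model-generated labels from the next training round, as in the sketch below. The label_source convention is a hypothetical example, not an Act requirement.

```python
# Guard against retraining on the model's own outputs: only records whose
# labels were confirmed by a human are eligible for the next training round.
def select_training_records(records: list[dict]) -> list[dict]:
    return [r for r in records if r.get("label_source") == "human_confirmed"]

records = [
    {"student_id": "A1", "label": "pass", "label_source": "human_confirmed"},
    {"student_id": "B2", "label": "fail", "label_source": "model_generated"},  # excluded
]
training_set = select_training_records(records)
```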
Maintain a documented quality management system (Article 17). Before placing the system on the market, complete the relevant conformity assessment procedure under Article 43 — for most education AI systems not embedded in Annex I regulated products, self-declaration under Annex VI (internal control) is available. Affix the CE marking. Register the system in the EU database under Article 49(1). Keep all documentation for 10 years (Article 18).
Schools, universities, training providers, and examination bodies that use high-risk education AI systems built by a third-party provider are deployers under Article 3(4). Article 26 and Article 27 set their obligations.
Take appropriate technical and organisational measures to ensure the system is used in line with the instructions provided by the EdTech company. Using a high-risk AI grading system outside its documented intended purpose — for example, applying it to a subject type not covered in the instructions — creates compliance exposure for the institution.
Designate natural persons — staff with the necessary competence, training, and authority — to perform human oversight. These persons must be able to understand the system's outputs, detect when the system is malfunctioning or producing biased results, and take corrective action. They must also be empowered to override or disregard AI outputs in individual cases.
Where the institution controls the input data fed to the high-risk AI system — for example, uploading student records to an admission screening tool — that data must be relevant and sufficiently representative of the intended population. Using a system trained on one demographic with data from a very different student population is a deployer-side compliance risk.
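A deployer-side check can be as simple as comparing the incoming student population against the population described in the provider's instructions for use. The categories below are hypothetical illustrations.

```python
# Illustrative Article 26(4) check: flag input categories the provider's
# documented training population never covered.
from collections import Counter

def coverage_gap(deployment_values: list[str], documented_values: set[str]) -> dict[str, int]:
    """Count deployment records whose category is absent from the documented training population."""
    counts = Counter(deployment_values)
    return {value: n for value, n in counts.items() if value not in documented_values}

documented_curricula = {"A-levels", "IB"}                      # taken from the instructions for use
applicants = ["A-levels", "IB", "BTEC", "BTEC", "A-levels"]    # the institution's own intake
print(coverage_gap(applicants, documented_curricula))          # {'BTEC': 2} -> investigate before use
```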
Monitor the system's operation on the basis of the instructions for use. If the system appears to present a risk within the meaning of Article 79(1), suspend its use and immediately notify the provider and the relevant market surveillance authority. Retain all automatically generated logs under the institution's control for at least six months — or longer if required by applicable data protection or national education law.
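A retention sweep along the following lines, with the window extended where data protection or national education law requires it, is one way to operationalise the six-month minimum. The file naming and layout are assumptions for this sketch.

```python
# Illustrative retention sweep: delete log files only once they are older than
# the configured retention window (at least six months under Article 26(6)).
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION = timedelta(days=183)  # minimum six months; extend if other law requires

def purge_expired_logs(log_dir: Path, now: datetime | None = None) -> list[Path]:
    now = now or datetime.now(timezone.utc)
    removed = []
    for log_file in log_dir.glob("*.jsonl"):
        modified = datetime.fromtimestamp(log_file.stat().st_mtime, tz=timezone.utc)
        if now - modified > RETENTION:
            log_file.unlink()
            removed.append(log_file)
    return removed
```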
Deployers of Annex III high-risk systems that make decisions or assist in making decisions relating to natural persons must inform those persons. In practice: when an AI system is used to grade work, determine a student's course track, or flag suspected cheating in a test, the student must be told that AI is involved. This obligation applies without prejudice to the separate Art 50 disclosure rules for AI systems that interact directly with students.
Article 27(1) makes the FRIA mandatory for bodies governed by public law — including public universities, state schools, and public examination bodies — before first deploying a high-risk AI system referred to in Article 6(2). Private institutions that deliver a public service of general economic interest are equally covered. The FRIA must document: the processes in which the system will be used; the intended period and frequency of use; the categories of natural persons and groups likely to be affected; the specific risks of harm to those persons; the human oversight measures to be implemented; and the measures to be taken if those risks materialise.
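For illustration only, the six elements can be captured in a structured record during the assessment; the field names below are an editorial mapping of Article 27(1), not a template prescribed by the Act.

```python
# Hypothetical structure for recording the six FRIA elements.
from dataclasses import dataclass

@dataclass
class FRIARecord:
    processes: list[str]                 # processes in which the system will be used
    period_and_frequency: str            # intended period and frequency of use
    affected_categories: list[str]       # categories of natural persons affected
    risks_of_harm: list[str]             # specific risks of harm to those persons
    human_oversight_measures: list[str]  # how human oversight is implemented
    mitigation_measures: list[str]       # measures if the risks materialise

fria = FRIARecord(
    processes=["first-year admissions screening"],
    period_and_frequency="annual intake cycle, continuous during the application window",
    affected_categories=["applicants", "enrolled students"],
    risks_of_harm=["indirect discrimination against applicants from under-represented schools"],
    human_oversight_measures=["admissions officer reviews every rejection recommendation"],
    mitigation_measures=["appeal route with full human re-assessment"],
)
```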
Once completed, the FRIA must be reported to the relevant market surveillance authority (Article 27(3)). It may be combined with a GDPR Article 35 Data Protection Impact Assessment where applicable.
AI tutoring assistants, homework-help chatbots, and conversational study tools are typically not high-risk under Annex III — they do not evaluate outcomes, determine access, or monitor exams. However, because they interact directly with students, they fall under the Article 50 limited-risk transparency obligations, which apply from 2 August 2026 alongside the high-risk rules.
Providers must ensure that AI systems designed to interact directly with natural persons — including students — are built to inform users that they are interacting with an AI, unless this is obvious from the context. The disclosure must be made at the latest at the time of the first interaction and must conform to accessibility requirements. An EdTech company deploying a chatbot tutor must build this disclosure into the product. The disclosure cannot be buried in terms and conditions.
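As a minimal sketch, assuming a hypothetical generate_reply backend, the disclosure can be wired into the session itself so the first response always carries the notice before any tutoring content.

```python
# Article 50(1)-style disclosure built into a tutoring chatbot session.
AI_DISCLOSURE = "You are chatting with an AI study assistant, not a human tutor."

class TutorSession:
    def __init__(self, generate_reply):
        self._generate_reply = generate_reply   # stand-in for the model backend
        self._disclosed = False

    def respond(self, student_message: str) -> str:
        reply = self._generate_reply(student_message)
        if not self._disclosed:
            self._disclosed = True
            return f"{AI_DISCLOSURE}\n\n{reply}"
        return reply

session = TutorSession(generate_reply=lambda msg: "Let's work through that question together.")
print(session.respond("Can you help me with quadratic equations?"))  # disclosure shown on first turn
```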
Providers of AI systems that generate synthetic text, audio, image, or video content must ensure the output is marked in a machine-readable format as artificially generated or manipulated. This applies to AI writing tools, essay feedback generators, and content creation systems used in education. The technical solution must be effective, interoperable, and robust to the extent technically feasible.
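One very simple illustration of the principle is attaching a provenance tag to a generated PNG via Pillow's text chunks, as below. Real products would normally adopt a standardised, interoperable provenance scheme rather than the ad-hoc marker key used in this sketch.

```python
# Attach a machine-readable "AI generated" marker to a PNG using Pillow metadata.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(image: Image.Image, path: str, generator: str) -> None:
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # hypothetical marker key
    metadata.add_text("generator", generator)
    image.save(path, pnginfo=metadata)

diagram = Image.new("RGB", (640, 480), "white")  # stand-in for generated content
save_with_ai_marker(diagram, "worksheet_diagram.png", generator="example-edtech-model")
```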
Deployers using AI to generate or manipulate image, audio, or video constituting a deepfake must disclose that the content is artificially generated, at the time of first exposure. Where the content is evidently artistic or fictional — for example, an AI-generated historical re-enactment video used in a history class — only a signal that AI was used is required, and the display of the work need not be disrupted.
Violations of Article 50 carry the same fine ceiling as high-risk obligation breaches: up to €15,000,000 or 3% of global annual turnover under Article 99(4).
These situations suggest the high-risk track applies and your system — or your use of it — requires closer scrutiny:
No changes to Annex III point 3 or to the education AI obligations are proposed under COM(2025) 836 or COM(2025) 837. The four high-risk system categories, the Article 6(3) derogation test, and the Article 26 and 27 deployer obligations remain unchanged in both Digital Omnibus proposals.
No. Only the four specific types listed in Annex III point 3 are high-risk: (a) AI used for admission to, access to, or assignment to an educational institution; (b) AI that evaluates learning outcomes, including adaptive systems that steer the learning process; (c) AI that assesses the appropriate level of education an individual will receive; and (d) AI that monitors and detects prohibited student behaviour during tests. Standard spell-checkers, library search tools, timetabling software, and general office automation are not high-risk. AI tutoring chatbots are typically limited-risk under Article 50, not high-risk, unless they also formally evaluate or determine outcomes that affect a student's access or progression.
Universities are deployers, not providers — the derogation is assessed by the provider (the EdTech company that built the system) before it is placed on the market. If the provider documents a valid derogation under Article 6(3), the system is not high-risk and the institution can use it without Article 26 high-risk deployer obligations. However, if the system profiles students — creating records of their behaviour, aptitude, or learning patterns to make predictions — the derogation cannot apply regardless of whether any of the other conditions is met. Article 6(3) expressly carves out profiling of natural persons from the derogation.
Under Article 26(11), deployers of high-risk Annex III systems that make decisions, or assist in making decisions, relating to natural persons must inform those persons. For education, this means that when a high-risk AI system grades work, places a student in a course track, or flags behaviour during an exam, the student must be told. Separately, Article 50(1) requires that any AI system intended to interact directly with students — such as a chatbot tutor — disclose it is an AI, clearly and at the time of first interaction. That Article 50 obligation also applies from 2 August 2026.
Yes. Article 27(1) makes the FRIA mandatory for bodies governed by public law before first deploying a high-risk AI system listed in Article 6(2). Public universities, state schools, and public examination boards are bodies governed by public law. The assessment must document six elements: processes in which the system will be used; time period and frequency; categories of affected persons (students, applicants); specific fundamental rights risks to those persons; how human oversight will be implemented; and measures to address those risks. The completed FRIA must be notified to the relevant market surveillance authority. Private institutions that provide a service of general economic interest — such as private universities receiving state funding — are also covered.
Article 99(4) applies to violations of the high-risk AI system requirements in Articles 9 through 21 and Articles 26 and 50. The maximum fine is €15,000,000 or 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. For providers, failure to establish a risk management system (Article 9), maintain adequate training-data governance free of discriminatory bias (Article 10), or provide compliant instructions for use (Article 13) are each Art 99(4) violations. For deployer institutions, failure to assign oversight persons (Article 26(2)), to retain logs (Article 26(6)), or to notify students of AI-assisted decisions (Article 26(11)) carries the same fine ceiling. Article 99(6) applies the lower of the fixed amount or the percentage for SMEs.
EU AI Act Compliance Hub
Every obligation, deadline, and sector guide in one place
High-Risk AI System Checklist
Step-by-step checklist for Articles 9–21 compliance
AI Provider Obligations
Full guide to Arts 9–21, CE marking, and EU database registration
AI Deployer Obligations
Article 26 checklist and FRIA for institutions using AI tools
EU AI Act and HR & Recruitment
Annex III point 4 obligations for employment AI
EU AI Act Timeline 2025–2030
Every enforcement deadline from February 2025 to August 2030
Use Regumatrix to identify which Annex III categories apply to your system, generate a compliance roadmap, and track your progress to the August 2026 deadline.