SIIA - Software & Information Industry Association

07/21/2025 | Press release | Distributed by Public on 07/21/2025 08:33

SIIA Seeks Clarification on Implementation of EU AI Act’s Rules for High-Risk AI Systems in the Educational Context

July 21, 2025
by Staff
Policy

SIIA has submitted comments to the European Commission in response to a consultation, seeking clarification on how the EU AI Act treats education technology that incorporates AI.


Background

Annex III of the AI Act includes the following among "high-risk AI systems":

"(a) AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels;

"(b) AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;

"(c) AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels;

"(d) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels."

The AI Act also contains a "derogation" in Article 6(3) which exempts from the definition of "high-risk AI system" any system that "does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making." The AI Act deems this applicable when any of the following conditions is fulfilled:

"(a) the AI system is intended to perform a narrow procedural task;

"(b) the AI system is intended to improve the result of a previously completed human activity;

"(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or

"(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III."


SIIA's Comment to the EC on the Scope of High-Risk AI Systems as Applied to Education Technology

We recommend further clarification on the scope of "Education and vocational training" in Annex III in light of the derogation in Article 6(3) and the range of education technology (EdTech) solutions that will prove critical to advancing educational objectives. Specifically, we request that guidelines clarify that paragraphs 3(b) and 3(c) of Annex III, which cover AI systems "used to evaluate learning outcomes" and "used for the purpose of assessing the appropriate level of education," apply to summative assessments and not to adaptive lessons.

The guidelines should clarify that AI-based tools designed to assist educators and learners by providing personalized learning and monitoring progress fall outside the scope of paragraphs 3(b) and 3(c). These tools enable teaching to adapt to individual students' paces, learning styles, and knowledge gaps, offering customized content, feedback, and remediation. While they evaluate learning outcomes and assess educational progress on an interim and ongoing basis, they do not present a significant risk of harm to individuals as defined in Article 6(3). For example, personalized learning tools are "intended to perform a preparatory task to an assessment," as set out in Article 6(3)(d). As noted above, we do not intend this to cover summative assessments, which are used to evaluate student learning at the end of a course. For further background, please refer to this letter from SIIA and the European Edtech Alliance.

SIIA - Software & Information Industry Association published this content on July 21, 2025, and is solely responsible for the information contained herein.