Update on the EU AI Act: EU Policymakers Propose Stricter Regulations for High-Risk AI Systems
EU policymakers are planning changes to the AI Act, which regulates artificial intelligence through a risk-based approach. The core of this legislation is to ensure safety and protect fundamental rights where high-risk AI systems are concerned.
In the original proposal, certain AI solutions were automatically categorized as high-risk, but recent discussions introduced exemption conditions that allow AI developers to avoid this classification. However, the European Parliament’s legal office expressed concerns that this approach could create legal uncertainty and conflict with the AI Act’s objectives.
Nonetheless, the latest version of the text, released by the European Parliament’s co-rapporteurs, maintains horizontal exemption conditions, with the criteria refined and clarified to provide better guidance.
These criteria include:
📌The AI system is designed for a narrow procedural task, such as transforming unstructured data or classifying documents.
📌The AI system enhances or reviews the outcome of a human activity, adding an extra layer to human work, for example by improving the language in a document.
📌The AI system is intended to detect decision-making patterns or inconsistencies, such as the grading patterns of teachers.
📌The AI system performs preparatory tasks that reduce its risk impact, such as file-handling software.
Additionally, any AI system involved in profiling people is still considered high-risk.
Market surveillance authorities will play a crucial role in evaluating AI systems and ensuring compliance, and they can impose fines if an AI provider misclassifies its system under the AI Act.
👉🏻The European Commission can update these criteria in response to technological developments or changes to the list of critical use cases, but only if there is concrete evidence that the AI systems concerned do not pose significant risks and the changes do not lower the overall level of protection.