AI Act
The AI Act, which entered into force on the 1st of August 2024, now requires all AI systems in the EU to have a risk classification.
EU Implements AI Act – What does it mean for those who develop, offer and use AI solutions?
The EU’s new regulation for artificial intelligence, known as the AI Act, entered into force on the 1st of August this year. This means that all AI systems placed on the market or put into use in the EU must now have a risk classification. This legislation marks a major step towards regulating artificial intelligence with the aim of ensuring responsible use and protecting fundamental rights. But what exactly does this classification entail, and what does it mean for developers and users of AI?
What is an AI system?
According to the AI Act, an “AI system” is “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
Put more simply, an AI system is a computer program designed to perform tasks that usually require human intelligence, such as learning from data, making decisions, and solving problems, with the ability to operate on its own and improve over time.
Classification of AI systems
The AI Act divides AI systems into four main risk categories:
1. Unacceptable risk: These systems are prohibited in the European Union. Examples include AI systems that manipulate human behavior in harmful ways, as well as social scoring systems used by governments.
2. High risk: AI systems that affect critical areas such as health, education, law enforcement, and employment. These systems must meet strict requirements for transparency, security, and dataset quality.
3. Limited risk: AI systems that require specific transparency requirements, such as chatbots that must inform users that they are interacting with an AI.
4. Minimal risk: AI systems that pose low risk, such as certain games and spam filters. These do not require specific measures.
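The four tiers above can be sketched as a simple data model. This is a conceptual illustration only, not part of the Act: the enum values, the example use cases, and the one-line obligation summaries are our own shorthand for the categories described above.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act (names are our shorthand)."""
    UNACCEPTABLE = "unacceptable"  # prohibited in the EU
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific measures

# Hypothetical mapping from the example use cases in the list above
# to their tier; real classification follows the Act's own annexes.
EXAMPLES = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "medical diagnostic tool": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Return a one-line summary of what each tier implies."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "conformity assessment, documentation, oversight",
        RiskTier.LIMITED: "inform users they are interacting with AI",
        RiskTier.MINIMAL: "no specific measures required",
    }[tier]

print(obligations(EXAMPLES["spam filter"]))  # → no specific measures required
```

The point of the sketch is that classification comes first: only once a system’s tier is known do the corresponding obligations attach.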
High-risk AI systems typically include
Healthcare: Diagnostic tools and medical imaging systems that can affect patient care.
Education and employment: Systems used to evaluate students or job applicants, which can have significant implications for individuals’ futures.
Public safety: Facial recognition technology used in real-time in public places and predictive policing that can affect privacy and rights.
Financial sector: Credit rating systems and automated trading systems that can affect financial stability and consumer rights.
The importance of high-risk classification
When an AI system is classified as high risk, providers and users must comply with a number of requirements to ensure that the system operates securely and fairly. These requirements are as follows:
1. Conformity assessment: The system must undergo a thorough assessment before it can be put into operation. This includes testing and evaluation to ensure that it meets all regulatory requirements.
2. Transparency and documentation: Comprehensive documentation and technical description of the system’s design, development and performance are required, as well as information to users about how the system works.
3. Registration and marking: AI systems must be registered and marked according to EU regulations, including CE marking to indicate compliance.
4. Human oversight: Ensure human oversight of the AI systems to minimize risks to health, safety, or fundamental rights. The oversight must be adapted to the risks, the level of autonomy, and the context in which the AI system will be used.
5. Risk management: Ensure that there is a risk management system in place that includes identifying and mitigating risks associated with the AI system’s use.
6. Data quality: Use high-quality data for training, validating, and testing the AI systems to ensure that they are functioning as intended and safe, and that they do not become a source of discrimination.
7. Security and robustness: Ensure that AI systems are robust and secure against attempts at manipulation or unauthorized modifications, including measures against cyber attacks.
8. Quality management system: Implement a quality management system that includes procedures for design, development, quality control, and risk management.
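The eight requirements above can be read as a compliance checklist. The following sketch models that checklist; the field names are our own shorthand for the requirements listed above, not terms defined in the Act.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskCompliance:
    # Each field mirrors one of the eight requirements listed above.
    conformity_assessment: bool = False
    transparency_documentation: bool = False
    registration_ce_marking: bool = False
    human_oversight: bool = False
    risk_management: bool = False
    data_quality: bool = False
    security_robustness: bool = False
    quality_management_system: bool = False

def missing_requirements(c: HighRiskCompliance) -> list:
    """List the requirements not yet satisfied."""
    return [f.name for f in fields(c) if not getattr(c, f.name)]

# Example: a provider that has completed only two of the eight steps.
status = HighRiskCompliance(conformity_assessment=True, human_oversight=True)
print(len(missing_requirements(status)))  # → 6
```

A checklist like this is a planning aid, not a legal assessment: each item corresponds to substantive obligations that must be evidenced, not merely ticked off.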
The implementation of the AI Act represents an important milestone in the regulation of artificial intelligence in Europe. With its focus on the classification and follow-up of AI systems based on risk, the legislation is a major step towards ensuring that the development and use of AI takes place in a way that protects health, safety and fundamental rights. For providers and users of AI, this means new requirements and guidelines that must be followed to ensure compliance and responsible use of a technology that is becoming increasingly integrated into our lives.
Please feel free to contact us; we can provide experts from both the legal and the technical side of AI.
The AI Act – An Introduction
The European Union (EU) Artificial Intelligence Act (AI Act) will become one of the most comprehensive legal frameworks for AI globally. This article delves into the anticipated implications of the AI Act and how firms can adapt to the changing regulatory landscape.
AI Act – Information Flooding and What You Should Do
With the aim of fostering innovation and ensuring the safe use of AI, the AI Act will have a potentially significant impact on firms leveraging AI for value creation. In this article we focus on what we consider the most important aspects and actions for those who need to comply with the AI Act.