AI Act – Information Flooding and What You Should Do
On August 1, 2024, the EU regulation on artificial intelligence, known as the AI Act, entered into force. Digital channels are flooded with information about the AI Act, and it can be challenging to navigate. We have therefore chosen to focus on what we consider the most important aspects and actions for those who need to comply with the AI Act.
A prerequisite for tackling artificial intelligence (AI) successfully is accepting that it requires collaboration and competence across several specialist areas, while continuously building knowledge and understanding of AI. Furthermore, long-term thinking and sustainability, rather than "AI hype", should be allowed to influence strategy work.
When does the AI Act apply?
The main part of the rules will start to apply only after two years, i.e. on August 2, 2026. However, there are exceptions, and the first rules apply as early as February 2, 2025, mainly targeting AI systems that are prohibited. There are also initiatives encouraging organisations to voluntarily start applying the rules early, which is something that should be considered as part of the organisation's data/AI strategy as well as its market strategy. This initiative is already underway, and more information is expected during September.
The AI Act does not answer all questions about how to proceed or what requirements to apply. Intensive work is already underway to produce guidelines, supplementary regulations, national detailed implementations, etc. Preparations and decision-making processes for establishing supervisory authorities and other functions are also ongoing.
AI systems and AI models
The AI Act regulates so-called AI systems as well as general-purpose AI models (e.g. the models underlying Claude or ChatGPT), collectively referred to here as AI, and aims to promote the use of AI while protecting health, safety and fundamental rights.
An AI system is defined as: (a) a machine-based system that is (b) designed to operate with varying levels of autonomy, and that (c) may exhibit adaptiveness after deployment and that, (d) for explicit or implicit objectives, infers, (i) from the input it receives, (ii) how to generate outputs such as predictions, content, recommendations, or decisions that (e) can influence physical or virtual environments. A system that does not meet all of these criteria is not an AI system regulated by the AI Act.
A general-purpose AI model is defined as: (a) an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that (b) displays significant generality and (c) is capable of competently performing a wide range of distinct tasks. AI models that are used for research, development or prototyping activities before they are placed on the market are exempt. Such an AI model can be part of an AI system.
Different types of requirements depending on risk
The AI Act imposes different requirements on AI depending on its risk classification. AI with the highest risk, e.g. systems used for “social scoring”, is prohibited. AI with the lowest risk, e.g. a spam filter, is not subject to any special requirements. AI used in so-called chatbots or for generating images, text, etc. is subject to additional transparency requirements: it must be clear what type of technology is used, which means that the recipient must understand that they are interacting with AI. The major focus of the AI Act is so-called high-risk AI, for example AI linked to credit scoring or recruitment. However, one should not yet rule out that there is AI in the organisation that could potentially be prohibited under the AI Act.
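To make the tiered logic concrete, below is a minimal illustrative sketch of how the risk tiers relate to the example use cases mentioned above. The tier names, mapping and function are simplified assumptions for illustration only and are not a legal classification tool; a real assessment must follow the AI Act's own criteria and forthcoming guidance.

```python
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"      # e.g. social scoring
    HIGH_RISK = "high-risk"        # e.g. credit scoring, recruitment
    TRANSPARENCY = "transparency"  # e.g. chatbots, generated images/text
    MINIMAL = "minimal"            # e.g. spam filters


# Simplified, illustrative mapping of the example use cases named in the text.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.PROHIBITED,
    "credit scoring": RiskTier.HIGH_RISK,
    "recruitment screening": RiskTier.HIGH_RISK,
    "customer chatbot": RiskTier.TRANSPARENCY,
    "image generation": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}


def tier_for(use_case: str) -> RiskTier | None:
    """Look up an example use case; None means 'not yet assessed'."""
    return EXAMPLE_USE_CASES.get(use_case)


if __name__ == "__main__":
    for use_case, tier in EXAMPLE_USE_CASES.items():
        print(f"{use_case}: {tier.value}")
```

The point of the sketch is simply that every AI use case in the organisation should end up in exactly one tier, and that "not yet assessed" is a state to eliminate, not a tier.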
Risk and biases
“Risk” refers to the combination of the likelihood of harm occurring and the severity of that harm, from a health, safety and fundamental rights perspective. The first two perspectives are common in product safety regulation, while the “fundamental rights” perspective is distinctive to the AI Act, which is why the connection to the GDPR becomes particularly relevant.
“Bias” must be managed where it can (a) affect fundamental rights or lead to discrimination, or where there is (b) a tendency to rely automatically or to an excessive extent on information produced by AI (automation bias). The same applies to (c) incorrect assessments made by AI that in turn affect what the AI learns from its own output (feedback loops), as well as (d) risks related to cybersecurity.
Human oversight & Operators
The AI Act also requires human oversight of AI. Organisations that provide and deploy AI need to ensure that their personnel have sufficient training and knowledge of AI in relation to the context in which the AI systems are to be used, and with regard to the persons, or groups of persons, on whom the AI systems are to be used.
The requirements for an organisation also vary depending on whether you develop the AI system or only use it in your business. Responsibility can also shift from a Provider to a Deployer (a Deployer is, for example, an organisation that uses AI in its business operations), depending on whether a system is adapted or changed in relation to the Deployer's business. It is therefore central to consider this in procurement and IT processes. In short, both during development and use, you must always know that the AI works as intended.
Sector-specific regulation, sustainability and the connection to the GDPR
Among financial institutions, there is AI that is specific to the sector, e.g. within credit risk and credit scoring, fraud and anti-money laundering, robo-advice, algorithmic trading, portfolio management and insurance pricing, but AI is also found in general tools such as word processing programs and browsers. In the former case, the systems already in place will probably largely meet the requirements of the AI Act, so that you do not need to implement parallel solutions. There are also connections between the GDPR and the AI Act which bring both opportunities and challenges. This needs to be analysed and handled based on the conditions and operations of each individual organisation.
The best AI system is only as good as the data it uses, but the data used in its development also plays a role. Training AI requires both computing power and manpower, which entails a climate footprint and a sustainability impact. In many cases, personal data is used, which means that the GDPR applies to protect the personal data in parallel with the AI Act.
What should we do now?
Artificial intelligence opens many possibilities, but it comes with great responsibilities. We need to ensure that we know what we are doing, both with regard to human rights and with regard to the risks to which we may expose our companies or organisations. The AI Act aims to help us acknowledge that, and that is a good thing. The longer we let AI operate without restrictions, the harder it will be to implement new routines and processes. There are therefore already several issues that can be addressed. The main questions to consider are:
- What AI systems are in use in our organisation?
  - Make an inventory of the AI systems you can find and what they are used for (an illustrative sketch of such an inventory follows after this list).
- How does the AI Act affect our daily work?
  - Ensure that you have an up-to-date AI strategy that gives everyone the right conditions to use AI in a way that does not add risk to the company and that complies with the regulation.
- How do we ensure that our organisation has the required AI skills?
  - Use the autumn to give everyone the training they need to understand, and correctly use, the AI you have within the organisation. This may even make the business more efficient.
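As a starting point for the inventory question above, the sketch below shows, purely as an assumption about what a useful record might contain, the kind of entry you could keep per AI system. The field names and example values are hypothetical; they simply tie together the role (Provider/Deployer), risk tier and GDPR aspects discussed earlier, not a prescribed format.

```python
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    """One row in an illustrative AI system inventory (hypothetical fields)."""
    name: str                 # internal name of the system or model
    purpose: str              # what it is used for in the business
    role: str                 # "provider" or "deployer" under the AI Act
    risk_tier: str            # e.g. "prohibited", "high-risk", "transparency", "minimal", "unassessed"
    uses_personal_data: bool  # flags the GDPR connection
    owner: str                # who is accountable internally
    notes: str = ""


# Two illustrative entries; a real inventory would cover every AI use case found.
inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="CV screening tool",
        purpose="Ranking job applications in recruitment",
        role="deployer",
        risk_tier="high-risk",
        uses_personal_data=True,
        owner="HR",
    ),
    AISystemRecord(
        name="Email spam filter",
        purpose="Filtering inbound email",
        role="deployer",
        risk_tier="minimal",
        uses_personal_data=True,
        owner="IT",
    ),
]

for record in inventory:
    print(f"{record.name} ({record.risk_tier}) - owner: {record.owner}")
```

Whether you keep such an inventory in a spreadsheet, a GRC tool or code is secondary; what matters is that every system has an owner, a role and a risk classification.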
With the right input at an early stage, you will ease into the requirements of the regulation with much less effort and headache. If you need further guidance or help to address the questions above, Advisense can provide experts from both the legal and the technical side of AI.
Read more about our Data Privacy offering here