All that glitters is not AI – check if the AI Act applies to you

There's been a lot of commotion around the AI Act lately, but do you really need to be concerned? Here are five key points you should check.
Published: 19.04.24

After lengthy negotiations, the EU finally reached agreement: the regulation on artificial intelligence, the "AI Act", was adopted on 13 March 2024. Although we now know more about the new rules, many questions remain unresolved. Among other things, the final text of the regulation has not yet been published, and we are still waiting for guidelines to clarify the many unclear areas of the regulation. This makes it difficult for Norwegian and European businesses to know what they need to address.

To help get you started, we have created a five-point checklist:

1. Is it AI?

All that glitters is not gold, and technology is no exception. The AI Act applies to systems that are "artificially intelligent", and thus not to all forms of technology. The regulation defines an AI system as a machine-based system that operates with at least some degree of autonomy, and that may also exhibit adaptiveness after deployment. Most importantly, the system must infer from the information it receives (input) how to generate its results (output), in a way that goes beyond ordinary statistical analysis.

If the technology you use does not fall within this definition, you do not have to comply with the AI Act.
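
For the technically inclined, the difference can be illustrated with a deliberately simplified sketch in Python. This is our own illustration, not a legal test, and the names are invented for the example: a hand-written formula is ordinary data processing, while a system that derives its input-to-output mapping from data is closer to the regulation's definition.

    # Our own simplified illustration -- whether a system is "AI" under
    # the AI Act is a legal question that code alone cannot settle.

    # A fixed, hand-written rule: ordinary statistical analysis.
    # The output follows directly from a formula a human specified.
    def average_spend(purchases: list[float]) -> float:
        return sum(purchases) / len(purchases)

    # A system that *infers* how to map input to output from data,
    # and whose behaviour can change as it is retrained after
    # deployment, is closer to the regulation's definition.
    class SpendPredictor:
        def __init__(self) -> None:
            self.weight = 1.0  # learned parameter, not a hand-written rule

        def train(self, history: list[tuple[float, float]]) -> None:
            # Least-squares fit (through the origin) of next month's
            # spend against this month's: the input-to-output mapping
            # is derived from data rather than written by hand.
            num = sum(x * y for x, y in history)
            den = sum(x * x for x, _ in history)
            self.weight = num / den

        def predict(self, this_month: float) -> float:
            return self.weight * this_month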

2. Are you within the scope of the AI Act?

The AI Act does not apply to all use cases of AI. There are some exceptions, such as AI developed solely for scientific research purposes, testing and development of AI before it is placed on the EU market, and, perhaps surprisingly, AI for military purposes.

3. What risk level applies to your business?

Most duties and requirements imposed by the AI Act depend on how much risk the AI entails.

This risk-based approach means, for example, that certain forms of AI are considered so invasive to individuals' fundamental rights that they are banned altogether. The purpose of the prohibitions is to protect fundamental human rights, and the primary means for this protection is to ensure that significant decisions about individuals are made by humans, not by AI.

Systems that can affect fundamental rights are considered high-risk AI. The same applies to AI used as safety components, for example in critical infrastructure. Such systems are subject to a number of requirements, which apply to providers, importers, distributors and other third parties. That is why it is crucial to find out whether your AI is classified as high-risk.

Certain systems are subject to more specific transparency requirements. For example, systems that interact directly with humans may be required to disclose that they are AI-based, and text, film and images created by AI may have to be "watermarked" as such.
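
As a rough sketch of what such transparency measures could look like in practice, consider the Python snippet below. The AI Act does not prescribe a specific technical mechanism, so the function names and metadata fields are our own invention:

    # Our own hypothetical sketch -- the AI Act does not mandate any
    # particular technical mechanism for disclosure or watermarking.

    def generate_answer(user_message: str) -> str:
        # Placeholder for your actual AI model.
        return "Here is my answer to: " + user_message

    def chatbot_reply(user_message: str) -> str:
        # Disclose to the user that they are interacting with AI.
        return "[You are chatting with an AI assistant] " + generate_answer(user_message)

    def label_generated_text(text: str) -> dict:
        # Attach machine-readable provenance so that downstream systems
        # can detect that the content is AI-generated ("watermarking").
        return {
            "content": text,
            "provenance": {"generated_by_ai": True, "model": "example-model"},
        }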

If your AI does not fall into the prohibited or high-risk category and is not subject to any of the system-specific requirements, you "only" need to adhere to the same regulations as you normally would.

4. Do you use general purpose AI, and what role do you have in its use?

The obligations imposed on prohibited systems, high-risk AI and AI with specific requirements are largely tied to the purpose for which the system is used. But what about systems that do not have one specific purpose, where the same system can solve many different tasks, and where some of those tasks could be high-risk, like certain tasks in HR, while others are completely harmless, like improved spell checking? This applies, for example, to the large language models and generative AI models that we know today.

These types of general purpose AI were the subject of negotiations right up until the adoption of the regulation. The result was more specific rules applying to "general purpose AI" ("GPAI"), i.e. AI with several possible purposes. The regulation imposes requirements on providers of GPAI and gives users and end users a right to a great deal of information about the AI system. It is worth noting that you may yourself be classified as a provider of GPAI, for example when using OpenAI's API to build your own general AI system.
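
As an illustration, the sketch below builds a simple assistant on top of a general purpose model via OpenAI's Python SDK. Whether such a setup actually makes you a provider under the AI Act is a legal assessment, not a technical one, and the model name and prompts are only examples:

    # Hypothetical sketch of building on top of a general purpose model.
    # Requires the openai package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    def assistant(question: str) -> str:
        # The model name below is illustrative.
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "You are a general purpose office assistant."},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content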

The distinction between a system that is high-risk and one that is not also applies to GPAI, imposing stricter requirements on GPAI systems classified as high-risk.

5. Get an overview of the requirements that apply to your risk class or your role when using GPAI

When you know which risk category your AI system belongs to, and whether you are a provider of GPAI, you can identify your obligations.

The EU plans to issue more guidance on how to fulfil the obligations of the regulation, and technical standards will be developed detailing the requirements that systems must comply with. Bull will write more articles on this subject in the future, which will also be published on Digi.no.

When will these requirements take effect in Norway?

Norwegian politicians have stated that the AI Act will be adopted in Norway at the same time as in the EU, if possible. The ban on certain forms of AI will thus take effect 6 months after the act is announced, the requirements for general purpose AI will apply after 12 months, and the requirements for high-risk systems after 36 months.
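
As a back-of-the-envelope illustration, the deadlines can be computed from the date the act takes effect. The date below is an assumption for the example; it was not yet known when this article was written:

    # Timeline sketch. The entry-into-force date below is an assumption
    # for illustration -- it was not known when this article was written.
    from datetime import date

    def add_months(d: date, months: int) -> date:
        month_index = d.month - 1 + months
        year = d.year + month_index // 12
        month = month_index % 12 + 1
        return date(year, month, min(d.day, 28))  # clamp to a valid day

    entry_into_force = date(2024, 8, 1)  # assumed, for illustration

    print("Prohibited AI:      ", add_months(entry_into_force, 6))   # +6 months
    print("General purpose AI: ", add_months(entry_into_force, 12))  # +12 months
    print("High-risk systems:  ", add_months(entry_into_force, 36))  # +36 months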

How can we help?

In need of legal assistance? Call or email us, and we'll figure out how we can help.