What’s in the EU’s AI Act

A ‘global first’ in regulating AI

Catherine Breslin
Jan 23, 2024

On Dec 9th, the EU agreed the shape and outline of the new EU AI Act. The exact text of the act is still to be finalised, and it should come into effect between 2024 and 2026.

This post comes with a caveat — I’m a technologist and not a lawyer. Also, while a draft version of the act was published earlier in 2023, the full text of the final act isn’t published yet. I’m relying on sources listed at the bottom of this post. So, please consult a lawyer if you need to comply with the act!

Defining AI

The act applies to AI software built or sold within the EU. Legislators are concerned about the harm that may be done with poorly conceived AI in specific applications and domains.

The definition of AI that’s used in the act is said to align with the OECD’s latest definition. This defines AI as a system which infers from inputs how to generate outputs (predictions, content, recommendations or decisions). The objectives of the system may be explicit (directly programmed by a human) or implicit (learnt).

This is a broad definition of AI that covers modern deep learning, but also expert systems and other ML algorithms.
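To illustrate just how broad that is, here's a minimal, entirely hypothetical sketch of a rule-based decision system with no learned parameters at all. It still takes inputs and infers an output (a decision), so a system like this could plausibly fall within the definition.

```python
# Hypothetical rule-based eligibility check: no machine learning, just
# hand-written rules (an 'explicit objective'), yet it maps inputs to a
# decision and so could arguably fall under the act's broad definition of AI.

from dataclasses import dataclass

@dataclass
class Applicant:
    annual_income: float   # EUR
    existing_debt: float   # EUR
    requested_loan: float  # EUR

def mortgage_decision(applicant: Applicant) -> str:
    """Return a decision from simple, human-authored rules."""
    debt_to_income = applicant.existing_debt / max(applicant.annual_income, 1.0)
    loan_to_income = applicant.requested_loan / max(applicant.annual_income, 1.0)

    if debt_to_income > 0.4:
        return "reject: existing debt too high"
    if loan_to_income > 4.5:
        return "reject: requested loan too large relative to income"
    return "approve"

print(mortgage_decision(Applicant(annual_income=50_000,
                                  existing_debt=10_000,
                                  requested_loan=200_000)))
```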

Banned Applications

The EU seeks to ban certain applications of AI outright. Those include:

  • “biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour to circumvent their free will;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).”

High Risk Applications

The next category of risk is high-risk applications. These are high risk “due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law”, and have specific obligations around areas like transparency, data quality and documentation.

Limited Risk Applications

Applications that pose only limited risk, such as chatbots, would be subject to light transparency requirements. For example, they might need to disclose that the user is interacting with AI.

General Purpose AI

The first draft of the AI Act was written before newer LLMs like ChatGPT, which span many domains, had appeared. So, a General Purpose AI category was subsequently added to the act, targeted at models like ChatGPT which have no single purpose. According to Fortune, general purpose AI (or foundation models) will be subject to transparency requirements unless they are free and open source. When models are capable enough to pose a 'systemic risk', there are additional obligations, and the open-source opt-out no longer applies. These additional obligations cover reporting, security and testing.

In both the EU AI Act and the US's recent executive order, the amount of computing power used to train a model is the metric for assessing whether a model is 'big enough' to pose a systemic risk.
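As a rough illustration of how such a compute threshold might be checked, here's a back-of-the-envelope sketch. It uses the common approximation that training a dense transformer costs roughly 6 * parameters * training tokens in FLOPs, together with the 10^25 FLOP figure widely reported for the EU's 'systemic risk' presumption (the US executive order's reporting threshold is reported as 10^26 operations). The model sizes below are made up for illustration, not real assessments.

```python
# Back-of-the-envelope training-compute estimate using the common
# approximation: training FLOPs ~= 6 * parameters * training tokens.
# The thresholds below are the widely reported figures, used here as
# illustrative assumptions rather than definitive legal values.

EU_SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # reported EU AI Act figure
US_EO_REPORTING_THRESHOLD_OPS = 1e26     # reported US executive order figure

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of total training compute for a dense transformer."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical example models (sizes are illustrative only).
for name, params, tokens in [
    ("small model", 7e9, 2e12),    # 7B parameters, 2T tokens
    ("large model", 1e12, 10e12),  # 1T parameters, 10T tokens
]:
    flops = estimated_training_flops(params, tokens)
    over = flops > EU_SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs, over reported EU threshold: {over}")
```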

Consequences

Fines would be up to 7% of a company's global annual turnover for violating the ban on prohibited AI applications, 3% of turnover for violating the act's other obligations, and 1.5% for supplying incorrect information.
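As a purely illustrative sketch based only on the percentages above (the final text is also expected to set fixed minimum amounts, which are ignored here), this is what those tiers would mean for a hypothetical company with €2 billion of global annual turnover.

```python
# Illustrative only: applies the percentage tiers mentioned in this post
# to a hypothetical turnover figure. Fixed minimum fine amounts expected
# in the final text are not modelled here.

FINE_TIERS = {
    "banned applications": 0.07,
    "other obligations": 0.03,
    "incorrect information": 0.015,
}

global_annual_turnover_eur = 2_000_000_000  # hypothetical EUR 2B turnover

for violation, rate in FINE_TIERS.items():
    print(f"{violation}: up to EUR {global_annual_turnover_eur * rate:,.0f}")
```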

Exceptions

Exceptions are made in the act for military and defence purposes, and for systems whose sole purpose is research and innovation. There are also some specific exceptions for law enforcement, particularly around urgent use and the prevention of certain crimes.

Thoughts

  • The definition of AI used in this act is broad, and it could feasibly cover a wide range of software. For example, a bank taking input data about your financial situation and making a rule-based decision about whether you’re eligible for a mortgage could fall under the definition. This would make the act broadly enforceable, even across companies that don’t believe they’re doing AI. The earlier April draft of the act used a more specific definition of AI based on a list of algorithms and techniques, which I think would have been harder to implement. Still, this goes to show the difficulty of carving AI out into its own category, distinct from software more broadly.
  • The separation by category of risk acknowledges that some products are inherently riskier than others. High-risk applications in the early draft included areas like education and hiring, because of the outsized impact these have on people’s lives. However, it’s worth asking what insight we currently have into the performance of the human processes that govern these sectors. We don’t know that AI is less fair than a human process. In these high-risk sectors, both human and automated processes should be held to high standards.
  • Testing is a big part of ensuring that software systems work as intended, and AI is no different: it’s a huge part of building a reliable AI system. Yet, while there is mention of testing and robustness obligations, the details are light. We’re entering a time when building a high-risk application on top of someone else’s API or open-source model is relatively straightforward. However, testing that these applications actually work is going to be comparatively time-consuming and expensive.
  • The act places heavy emphasis on transparency and reporting with regard to data and documentation. That may help to build trust. Yet just knowing about the training data isn’t nearly enough to determine whether a system works as intended. For that, you need much more focus on testing.
  • By nature, the regulation is more backwards- than forwards-looking. General Purpose AI was already added in as an extra category between the initial and final draft, to account for advances in the field. We don’t know what new advances — e.g. in small language models or end-to-end models — will change the state of play in 2024 and beyond.

Clarity on AI regulation is welcome. Pulling this legislation together was clearly no easy task in a field as fast-moving as AI. When the final text emerges and the gaps are filled, it will become easier to understand the exact obligations that companies have with regard to their software.
