The EU’s AI Act: An Overview of Three Versions of the Law

Published on: 2023-06-27

The EU AI Act will bring about a new era of AI regulation impacting organisations in Europe and beyond.

The European Commission’s initial proposal included a broad definition of “AI”, strict rules on the use of “high-risk” AI systems, and higher fines than exist even under the EU General Data Protection Regulation (GDPR).

The EU’s Council and Parliament have now proposed their own versions of the law. Each institution has a different vision of which AI systems should be most tightly regulated, which AI practices should be banned, and how high the maximum fines should be.

This article will help you understand how these three versions of the AI Act look—and what needs to happen before the law can finally pass.

How the AI Act Is Made

The EU’s legislative process requires each of the EU’s three lawmaking institutions to adopt a “common position” on the law:

  • The European Commission: The EU’s executive body, which initially proposed the regulation.
  • The Council of the European Union, which brings together government ministers from each EU member state.
  • The European Parliament, which consists of directly elected representatives from across the EU’s member states.

Each institution has now agreed on its respective view of what the AI Act should contain.

As such, the final stage of the EU’s legislative procedure (known as the “trilogue”) can begin. The trilogue is a negotiation between the Council and the Parliament, mediated by the Commission.

Overview of the Proposed AI Act

The Commission put forward the initial proposal for the AI Act. The Council and the Parliament versions of the law are, essentially, amendments to the Commission version.

Here are some of the important elements of the AI Act proposal.

Risk-Based Approach

The AI Act is built around a “risk-based” approach to regulating AI. Each type of AI system or use case will fall within one of the following four risk categories.

  • Unacceptable risk (“prohibited practices”): Illegal to develop, deploy, or use in the EU.
  • High risk: Permitted, but only subject to strong safeguards and an in-depth risk assessment (a “conformity assessment”).
  • Limited risk: Minimal obligations, mostly around transparency.
  • Minimal or no risk: No new rules under the AI Act.
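
To make this tiering concrete, here is a minimal, purely illustrative Python sketch. The category names come from the proposal, but the one-line obligations are a simplification for illustration, not the legal text:

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal or no risk"

# Simplified mapping of risk tier -> headline obligation (illustrative only,
# not the text of the AI Act)
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "prohibited practice: illegal to develop, deploy, or use in the EU",
    RiskLevel.HIGH: "strong safeguards plus a conformity assessment required",
    RiskLevel.LIMITED: "minimal obligations, mostly transparency",
    RiskLevel.MINIMAL: "no new rules under the AI Act",
}

for level in RiskLevel:
    print(f"{level.value}: {OBLIGATIONS[level]}")
```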

The AI Act’s rules and obligations vary according to an AI system’s risk level.

Who Would Be Covered By the AI Act?

The AI Act will apply “extraterritorially”, meaning that non-EU organisations will be covered by the law under certain conditions.

(And note that EU law applies across the European Economic Area, or “EEA”, which includes Iceland, Liechtenstein, and Norway).

The AI Act applies differently to different people. The law categorises organisations according to their position in the AI supply chain or “lifecycle”.

  • Provider: A person or organisation that develops an AI system—or has an AI system developed on its behalf, with a view to putting its AI system “on the market” or “into service” in the EU under the provider’s own name or trademark.
  • User: An organisation using an AI system for anything other than “personal non-professional activity”.
  • Importer: An organisation established in the EU that makes available an AI system produced by an organisation not established in the EU—if the AI system remains under the non-EU organisation’s name or trademark.
  • Distributor: An organisation that makes an AI system available in the EU “without affecting its properties”. A distributor is not an importer or producer of the AI system.

Collectively, the above types of entities are known as “operators”.

AI Act: Commission vs Council vs Parliament

As noted, the Council and the Parliament have each agreed on a respective ideal version of the AI Act, known as a “common position” (the Council calls its common position a “general approach”).

We’ll now look at how the Commission, the Council, and the Parliament have approached some important areas of the AI Act.

Definition of ‘AI’

The AI Act’s definition of an “AI system” will determine what sorts of software and systems—and, therefore, which organisations—fall under the law’s scope.

The institutions disagree on how broad this definition should be.

Commission

The Commission proposal lists the following three types of approaches to AI in Annex I of the law:

  • Machine learning approaches (including supervised, unsupervised, reinforcement, and deep learning)
  • Logic- and knowledge-based approaches (including knowledge representation, inductive programming, knowledge bases, inference and deductive engines, symbolic reasoning, and expert systems)
  • Statistical approaches, Bayesian estimation, and search and optimisation methods

The AI Act would give the Commission the power to amend or expand this list of AI approaches.

An “AI system” is any software that:

  • Is developed using one or more of the above approaches
  • Relies on a set of human-defined objectives
  • Can generate outputs such as content, predictions, recommendations, or decisions “influencing the environments they interact with”

This is a broad definition that would capture many types of software. But remember that not all AI systems would be subject to legal obligations under the AI Act.

Council and Parliament: AI Definition

The Council and Parliament both propose a narrower definition of “AI system” than the Commission.

Both bodies want to remove Annex I of the AI Act, which contains the Commission’s list of “AI approaches”. They provide a definition of an “AI system” that largely corresponds with that of the Organisation for Economic Co-operation and Development (OECD).

The Council and the Parliament’s AI definitions both focus on “machine learning”. Either body’s definition would exclude certain types of software that would qualify as “AI systems” in the Commission’s proposal.

Prohibited Practices

Some uses of AI are prohibited under the AI Act. The EU’s three lawmaking bodies each have different views about which AI practices should be banned.

Subliminal Techniques

The Commission and Council versions of the AI Act would ban AI systems that deploy “subliminal techniques” to “materially distort a person’s behaviour” in harmful ways.

The Parliament would expand the prohibition to include a system that deploys “purposefully manipulative or deceptive techniques” that could impair a person’s ability to “make an informed decision”.

Exploiting Vulnerable People

The Commission draft would ban AI systems that cause harm by exploiting vulnerabilities among “a specific group of persons due to their age, physical or mental disability.”

The Council and Parliament would both expand the prohibition to cover exploitation based on a person or group’s socio-economic status, with the Parliament also banning exploitation based on a person or group’s “known or predicted personality traits”.

Biometric Categorisation

Under the Commission and Council versions of the law, “biometric categorisation systems”—which group people according to biometric information about them—are treated as limited-risk systems and would be subject only to transparency obligations.

The Parliament text would go further, banning biometric categorisation systems that group people based on sensitive or protected characteristics.

Social Scoring

The AI Act deals with “social scoring systems”: AI systems that evaluate a person’s “trustworthiness” by analysing or predicting their behaviour or personality traits. Such systems would be banned if they result in certain forms of detrimental or unfavourable treatment.

But while the Commission text only bans social scoring systems being used or developed by public authorities or their contractors, the Council and the Parliament would extend the ban to the private sector.

Biometric Identification Systems

The three institutions each take different positions on how the AI Act should treat “real-time remote biometric identification systems in publicly accessible spaces”, such as live facial recognition cameras.

The Commission proposal would prohibit the use of real-time remote biometric identification systems only by law enforcement authorities—and even then, subject to exceptions covering:

  • Searching for the victims of crime
  • Preventing a “specific, substantial and imminent threat” to people’s lives or safety
  • Preventing a terror attack
  • Searching for people suspected of certain serious crimes, as defined under Article 2(2) of the European Arrest Warrant Framework Decision

The Council version of the prohibition is similar to the Commission’s, except that it expands the exceptions to allow such systems for preventing attacks on critical infrastructure and threats to “health”, and to enable searches for a broader range of criminal suspects.

The Parliament takes the strictest position—its version of the law would ban any use of real-time remote biometric identification systems in publicly accessible spaces.

Predictive Policing

The Parliament version of the AI Act introduces a ban on so-called “predictive policing” (a concept explored in the Philip K. Dick short story “The Minority Report” and its 2002 film adaptation).

This provision would ban AI systems designed to predict whether a person might commit a crime or other offence based on their personality, location, or criminal history.

High-Risk AI Systems

The AI Act designates certain AI systems (and uses of AI) as “high-risk”.

High-risk AI systems are not banned, but their development and use are heavily restricted—and only permitted subject to a “conformity assessment”.

At the highest level, there are two types of high-risk AI systems, which the Commission proposal lists in the law’s annexes:

  • Annex II: AI systems used as a safety component or product under certain EU laws
  • Annex III: Certain types of AI systems used in specific areas

The Commission would have the power to amend both lists—essentially adding new “high-risk” AI systems or stripping existing AI systems of their “high-risk” status.

Annex III of the Commission’s proposal lists 20 specific types of high-risk AI systems across the following eight areas:

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, workers management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

Council and Parliament: High-Risk AI Systems

Both the Council and the Parliament would leave Annex II alone, but each hopes to change the list of high-risk AI systems at Annex III.

Under the Council version of the law, AI systems used for the following purposes would be removed from the Commission’s list of high-risk AI systems:

  • Deep fake detection by law enforcement
  • Crime analytics
  • Travel document authentication

The Council would add the following AI systems to the “high-risk” list:

  • AI systems used as safety components in critical digital infrastructure
  • AI systems used to assess pricing or eligibility for life and health insurance

The Council would also exclude all systems producing “purely accessory” outputs from the “high-risk” definition. If an AI system produces an output that merely aids human decision-making in an otherwise high-risk context, the AI system would not be “high-risk”.

Under the Parliament version of the law, only AI systems that pose a “significant risk” would qualify as “high risk”.

However, in addition to the items added to the “high risk” list by the Council, the Parliament would add many new types of AI systems to the “high risk” list, including (but not limited to):

  • “Emotion recognition” systems and other systems that use biometric data to make inferences about individuals (other than biometric verification systems)
  • Safety components for rail and air traffic
  • AI systems used:
    • To assess eligibility for vocational training
    • To detect cheating in education
    • For healthcare triage
    • For border checks
    • For migration and asylum forecasting
    • For alternative dispute resolution processes (in addition to legal processes)
    • To influence the outcome of elections and referendums
    • As recommender systems by social media platforms that are “Very Large Online Platforms” under the EU Digital Services Act

Additionally, under the Parliament proposals, if a distributor, importer, or user substantially modifies an AI system that is not designated “high-risk”, the AI system will automatically become a high-risk AI system.

Obligations on Operators

The AI Act imposes extensive obligations on operators of high-risk AI systems, and limited obligations on operators of certain other types of AI systems.

Foundation Models and General Purpose AI Systems

Since the Commission’s initial AI Act proposal, the use of “generative AI” tools (such as chatbots and image generators) has exploded among businesses and individuals.

The Parliament version of the AI Act would add two new concepts:

  • The “foundation model”, which is trained on broad data at scale and designed for generality and adaptability of output
  • The “general purpose AI system”, which can be used for a wide range of purposes for which it was not initially designed

Large language models (LLMs), such as OpenAI’s Generative Pre-trained Transformer (GPT) models, would fall within these definitions.

The Parliament would impose new rules on providers of foundation models, including (among other things):

  • Conducting risk assessments
  • Only using datasets that are subject to appropriate data governance standards
  • Designing and developing the model to:
    • Achieve high levels of performance, predictability, safety, and other attributes
    • Minimise energy use and waste
  • Drawing up “extensive technical documentation” and “intelligible instructions” for use of the model by downstream providers
  • Registering the model with the proposed EU database for high-risk AI systems

OpenAI, the US-based company behind the popular ChatGPT software, has suggested that it might leave the EU market if this part of the Parliament version of the law passes (the company’s CEO later retracted this statement).

Regulation and Enforcement

The AI Act would create a new body called the European Artificial Intelligence Board (EAIB). Each EU member state would designate “competent authorities” and “market surveillance authorities” to enforce and monitor the application of the AI Act.

Under the Commission and Council versions of the law, competent authorities would have the power to fine companies up to €30 million or 6% of worldwide annual turnover for the previous year (whichever is higher) for the most serious violations of the AI Act.

The Parliament hopes to raise the maximum fine to €40 million or 7% of worldwide annual turnover.

The Council would introduce a lower maximum fine for small to medium-sized enterprises (SMEs)—up to 3% of worldwide annual turnover.
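
To make these caps concrete, here is a small worked example. The percentages and fixed caps come from the proposals above; the turnover figure is invented purely for illustration:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the maximum fine: the higher of a fixed amount or a
    percentage of worldwide annual turnover, per the penalty caps above."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Hypothetical company with €1 billion worldwide annual turnover
turnover = 1_000_000_000

# Commission/Council: up to €30 million or 6% of turnover, whichever is higher
print(max_fine(turnover, 30_000_000, 0.06))  # 60000000.0, i.e. €60 million

# Parliament: up to €40 million or 7% of turnover
print(max_fine(turnover, 40_000_000, 0.07))  # 70000000.0, i.e. €70 million
```

For a company of this size, the percentage-based cap dominates in both versions; the fixed cap only bites for smaller firms.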

When Will the AI Act Pass?

The EU’s legislative process is long and complex. But the AI Act is moving quicker than some commentators expected.

There is a chance that the AI Act passes by the end of 2023, which could mean that the law takes effect in 2025 at the earliest.

However, the EU’s three lawmaking institutions have quite different visions for how parts of the law should look. The stricter rules for public authorities envisioned by the Parliament could prove difficult for the Council to accept during the trilogue process.

We’ve covered many of the most important aspects of the law—but there are many more considerations in addition to those explored above.

The AI Act will have a substantial impact on almost every EU-based organisation using AI—and any company developing AI systems for the EU market.

As such, forward-thinking organisations have already begun preparing for the new era of AI regulation.
