
The EU's AI Act – a guide

We have summarised the EU's groundbreaking AI Act and its most important provisions for you, together with a timeline for complying with the new AI rules.

What is the AI Act?

The EU Artificial Intelligence Act is the EU's new flagship regulation on artificial intelligence. The final text of the AI Act was published in the EU Official Journal on 12 July 2024 and entered into force on 1 August 2024. The AI Act will have a significant impact on organisations that develop or use AI, both in the EU and beyond.

The AI Act imposes risk- and technology-based obligations on organisations that develop, use, distribute or import AI systems in the EU; non-compliance can result in heavy fines of up to €35 million or 7% of global annual turnover.

How will the AI Act be applied?

How the AI Act applies depends on the AI technology concerned, the use case and the role of the operator. The approach is largely risk-based and can be roughly summarised as follows:

  • AI systems for certain uses will be prohibited.
  • Certain AI systems are classified as high-risk AI systems and are subject to extensive obligations, particularly for providers.
  • There will be specific provisions for general-purpose AI models. These models will be regulated regardless of the use case.
  • Other AI systems are considered low-risk. These AI systems will only be subject to limited transparency obligations when interacting with individuals.

When does the AI Act apply?

The AI Act was published in the EU Official Journal on 12 July 2024 and entered into force on 1 August 2024 (i.e. 20 days after publication).

Most provisions of the AI Act will apply after a two-year transitional period (i.e. from 2 August 2026). During this period, various supporting delegated legislation, guidelines and standards will be published to facilitate compliance with the AI Act.

This two-year period is subject to some important exceptions: the prohibitions on certain AI systems and the AI literacy requirements apply after six months (i.e. from 2 February 2025), while the requirements for general-purpose AI models apply after twelve months (i.e. from 2 August 2025).
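
As a purely illustrative aid (not legal advice), the compliance timeline can be tracked programmatically; the milestone dates below are those fixed by Article 113 of the AI Act:

```python
# Illustrative compliance countdown. The milestone dates are those fixed
# by Article 113 of the AI Act; everything else is a sketch.
from datetime import date

MILESTONES = {
    date(2024, 8, 1): "Entry into force",
    date(2025, 2, 2): "Prohibitions and AI literacy requirements apply",
    date(2025, 8, 2): "Requirements for general-purpose AI models apply",
    date(2026, 8, 2): "Most remaining provisions apply",
}

today = date.today()
for deadline, label in sorted(MILESTONES.items()):
    days = (deadline - today).days
    status = f"in {days} days" if days > 0 else f"{-days} days ago"
    print(f"{deadline:%d %B %Y}: {label} ({status})")
```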

Definition of AI systems

Most of the obligations under the AI Act apply to AI systems.

The definition of an AI system is broad: ‘a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.

Separate obligations apply to certain general-purpose AI models, which may underlie a wide range of different AI systems.

Prohibited AI systems

The AI Act will prohibit the use of certain types of AI systems. The prohibitions include (among others):

  • Certain AI systems for biometric categorisation and identification, including the untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
  • AI systems that use subliminal techniques or exploit vulnerabilities to materially distort a person's behaviour in a way that causes, or is likely to cause, significant harm.
  • AI systems for emotion recognition in the workplace and in educational institutions.
  • AI systems for the social scoring of natural persons or groups of persons over a period of time based on their social behaviour or personal characteristics.

High-risk AI systems

The strictest regulatory obligations under the AI Act apply to high-risk AI systems (HRAIS). These are AI systems that are safety components of products covered by existing EU product safety legislation, as well as AI systems intended for certain purposes, in particular in the following areas:

  • AI systems used as safety-critical components in the management and operation of essential public infrastructure, such as water, gas and electricity supply.
  • AI systems used to determine access to educational institutions or to evaluate students, such as AI systems used to grade exams.
  • AI systems used in hiring and employment, such as placing job ads, evaluating candidates or reviewing job applications, making decisions on promotions or dismissals, or evaluating work performance.
  • AI systems used in migration, asylum and border control management or in various other law enforcement and justice contexts.
  • AI systems used to influence the outcome of democratic processes or the voting behaviour of constituents.
  • AI systems used to evaluate creditworthiness or to assess risk and set prices in life and health insurance.

The list of high-risk AI systems is not exhaustive and may be added to in the future if further high-risk AI applications emerge.

The obligations summarised below for high-risk AI systems apply primarily to providers and not to other operators. Providers are those who develop an AI system (or have one developed) and place it on the market or put it into service under their own name or trademark.

Other operators (including deployers, distributors and importers) have fewer obligations. They may, however, be treated as providers in certain circumstances, for example if they substantially modify a high-risk AI system or place it on the market under their own name or trademark.

Providers of high-risk AI systems are subject to extensive substantive obligations with regard to these AI systems, in particular the following:

  • Risk management system: Procedures must be implemented for the entire life cycle of the AI system to identify, analyse and mitigate risks.
  • Data and data governance measures: Training and testing of AI systems must be carried out in accordance with strict data governance measures.
  • Technical documentation: Creation of a comprehensive ‘manual’ for the AI system, containing specific minimum information.
  • Record retention: High-risk AI systems must be designed to ensure automatic logging of events, including, for example, usage time and input data. These logs must be kept by providers for specified periods (a minimal logging sketch follows this list).
  • Transparency: High-risk AI systems must be accompanied by instructions for use that provide detailed information about their characteristics, capabilities and limitations.
  • Human oversight: High-risk AI systems must be designed to be overseen by humans, who must meet various requirements, such as the ability to understand the AI system (‘AI literacy’) and to stop its use.
  • Accuracy, robustness and cybersecurity: High-risk AI systems must be accurate (with accuracy metrics stated in the instructions for use), resilient to errors or inconsistencies (e.g. through fail-safe plans) and resilient to cyberattacks.
  • Quality management system: Providers of high-risk AI systems must establish a comprehensive quality management system.
  • Post-market surveillance: Providers of high-risk AI systems must document a system for collecting and analysing data provided by users on the performance of the AI system throughout its lifetime.
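
The Act prescribes no particular logging format or API for the record-retention duty above. As a purely illustrative sketch, a provider might wrap the system's inference call so that every invocation automatically records a timestamped event; the function names and log format here are hypothetical:

```python
# Purely illustrative: the AI Act prescribes no logging format or API.
# Wraps an inference call so each invocation automatically records a
# timestamped event with a hash of the input data.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="hrais_events.log", level=logging.INFO)

def logged_inference(model_fn, input_data: str) -> str:
    """Run model_fn on input_data and automatically record an event."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_sha256": hashlib.sha256(input_data.encode()).hexdigest(),
    }
    output = model_fn(input_data)
    event["output_chars"] = len(output)
    logging.info(json.dumps(event))
    return output

# Usage with a stand-in model function:
result = logged_inference(lambda text: text.upper(), "loan application #42")
```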

Providers of high-risk AI systems are also subject to various procedural obligations before they can provide such an AI system:

  • CE marking: Providers must ensure that their AI system undergoes a conformity assessment procedure before it is provided and affix a CE marking to the AI system (or, where that is not possible, to its packaging or accompanying documentation).
  • Registration in the EU database: Providers and public-sector bodies using high-risk AI systems must register the system in an EU-wide database of AI systems.
  • Reporting requirement: Providers of high-risk AI systems must report serious incidents related to their AI system to a competent authority within 15 days of becoming aware of them.

Other operators of high-risk AI systems are subject to more limited obligations, such as conducting a fundamental rights impact assessment (required of certain deployers), using the system in accordance with its instructions for use, monitoring its operation and keeping the logs generated by the AI system (provided these are under their control).

General-purpose AI

AI technologies that are not prohibited or deemed high-risk are subject to much less stringent regulatory requirements.

The most stringent of the remaining requirements under the AI Act apply to general-purpose AI (GPAI) models, a category that includes foundation models and generative AI models. For most GPAI models, the requirements focus primarily on transparency.

The obligations for all general-purpose AI models include the creation of technical documentation, compliance with EU copyright law and the publication of a summary of the content used to train the model.

The final text includes additional requirements for general-purpose AI models with so-called systemic risk: models with high-impact capabilities, which are presumed where the cumulative compute used for training exceeds 10^25 floating-point operations (FLOPs). These requirements reflect the systemic risks that such models may pose across the value chain.
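
For a rough sense of where that threshold lies, training compute is often estimated with the community heuristic of 6 × parameters × training tokens; this heuristic is an assumption of the sketch below, not part of the AI Act:

```python
# Rough, illustrative estimate of training compute. The 6 * N * D heuristic
# is a common community approximation, not part of the AI Act; the Act only
# sets the 10**25 FLOP presumption threshold for systemic risk.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    return 6 * n_parameters * n_tokens

# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
      else "Below the presumption threshold")
```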

Any general-purpose AI model with systemic risk is subject to additional requirements, including:

  • Rigorous model assessments, including adversarial testing/red teaming.
  • Assessment and mitigation of possible systemic risks posed by the use of the general-purpose AI model.
  • Stronger reporting requirements to regulators, particularly in the event of serious incidents.
  • Ensuring adequate cybersecurity for the general-purpose AI model with systemic risk.
  • Reporting on the energy efficiency of the general-purpose AI model.

Other AI systems

Apart from the above, and leaving aside two specific exceptions (military or defence purposes; research and innovation), the only binding requirement for other AI systems is a limited transparency obligation: providers must ensure that AI systems intended to interact with individuals are designed and developed in such a way that users are aware they are interacting with an AI system.
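
In practice, this can be as simple as a clear disclosure at the start of an interaction. The wording and structure below are hypothetical; the Act does not prescribe any particular phrasing:

```python
# Minimal illustration of the transparency duty for AI systems that
# interact with individuals. The disclosure text is hypothetical; the
# AI Act does not prescribe specific wording.
AI_DISCLOSURE = "You are interacting with an AI system, not a human."

def start_conversation(user_message: str, generate) -> str:
    """Prepend the disclosure to the opening reply of a conversation."""
    return f"{AI_DISCLOSURE}\n\n{generate(user_message)}"

# Usage with a stand-in text generator:
print(start_conversation("Hello!", lambda msg: f"Hi! How can I help with: {msg}"))
```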

However, all providers and deployers of AI systems are under a general obligation to ensure that personnel involved in the operation and use of AI systems have a sufficient level of AI literacy. The appropriate level depends on the training, experience and technical knowledge of the personnel, as well as on the context in which the AI systems in question are to be used.

Sanctions

The sanctions under the AI Act can be very high, ranging from up to €7.5 million (or 1% of global annual turnover) to up to €35 million (or 7% of global annual turnover) for the preceding financial year, whichever amount is higher, depending on (i) the type of violation and (ii) the size of the company.
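
Worked example, purely for illustration: each tier's cap is the higher of a fixed amount and a percentage of worldwide turnover, so the effective maximum grows with the size of the company:

```python
# Illustrative only: the maximum fine in each tier is the higher of a
# fixed cap and a percentage of total worldwide annual turnover for the
# preceding financial year.
def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the higher of the fixed cap and pct of turnover."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Hypothetical example: a prohibited-practice violation (up to EUR 35m or 7%)
# by a company with EUR 1 billion worldwide annual turnover.
print(f"Maximum fine: EUR {max_fine_eur(1e9, 35e6, 0.07):,.0f}")  # EUR 70,000,000
```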
