EU Artificial Intelligence Act: a new Byte of Regulation

The European Union (“EU”) has taken a significant step towards regulating artificial intelligence (“AI”) with the introduction of a landmark new act. The EU Artificial Intelligence Act, Regulation (EU) 2024/1689 (the “AI Act”), entered into force across all Member States on 1 August 2024.

Scope and Application of the AI Act

The AI Act aims to establish a comprehensive, risk-based regulatory framework for the creation, development, distribution and onward use of AI systems across the EU. It seeks to ensure that AI systems are created and developed in a manner that is both safe and ethical, without hindering the potential for further growth and innovation.

The AI Act is intended to apply to a variety of roles in the AI supply chain, including providers, deployers, product manufacturers, distributors and importers of AI systems (each an “AI System Operator”). Each AI System Operator has its own distinct responsibilities and obligations under the AI Act, so understanding the role each plays in the wider AI supply chain will prove critical to ensuring compliance.

Extraterritorial effect of the AI Act

The AI Act is intended to apply to providers of AI Systems who:

  1. are located in the EU; or
  2. are located outside the EU, but where the output produced by the AI System is used within the EU.

Regulating through the AI Act

Recognising that different rules will apply depending on the type of AI System being deployed, the AI Act sets out four risk ‘categories’:

  • Prohibited AI practices: practices considered a clear threat to fundamental rights and therefore banned under the AI Act (for example, risk assessment systems which assess the risk of a person committing a criminal offence or re-offending).
  • High-risk AI practices: AI Systems that present a ‘high risk’ to the health, safety or fundamental rights of EU citizens, and which are permitted only subject to strict requirements (for example, medical devices).
  • Limited-risk AI practices: AI Systems that are subject to transparency requirements, but are not ‘high risk’ (for example, chatbots).
  • Minimal-risk AI practices: AI Systems that present little to no risk (for example, AI-enabled video games).

The AI Act recognises that, for the most part, AI Systems are likely to fall within the minimal-risk category, which places few, if any, obligations on AI System Operators.

Our Views

By establishing a clear framework with both safety and innovation in mind, the AI Act represents a significant step forward in regulating AI within the EU. It will be interesting to watch how this new regulation shapes developments in AI Systems in the near future.


The content of this page is a summary of the law in force at the date of publication and is not exhaustive, nor does it contain definitive advice. Specialist legal advice should be sought in relation to any queries that may arise.
