Featured Insight

Europe’s AI regulation gets real: what to know (and do) about the EU AI Act as it nears finalisation


Introduction

Two long years after the release of the first draft in April 2021, the European Union’s (EU) proposed Artificial Intelligence Act (AI Act) has passed a major milestone. On 14 June 2023, the European Parliament agreed (by a significant majority) to a final negotiating position on the AI Act.

This brings the AI Act one (significant) step closer to becoming law. The next (and final) step is trilogue negotiations on the final form of the AI Act between the European Parliament, the Council of the European Union and the European Commission.

As the first significant attempt at regulating AI on a large scale, the AI Act is a major ‘must watch’ development for companies around the world.

  • Once passed, it will have extraterritorial effect (noting it is unlikely to come into effect for at least a year or two)
  • It is likely to influence how other regulators approach the development of AI regulation around the world
  • The proposals are likely to influence stopgap voluntary rules that the EU is reportedly developing with input from tech companies such as Google
  • The voluntary rules are likely to fall under an “AI Pact” intended to involve all major European and non-European AI actors.

This featured insight – the latest in our generative AI and ChatGPT series – provides a high-level summary of what companies need to know about the current form of the proposed AI Act (as at June 2023). We also share tips on regulatory and governance risks companies should consider – and the steps they should take to mitigate those risks.

Finalisation of the AI Act may take another six months or more, but with detailed (and in some cases, extensive) requirements and potential penalties reaching into tens of millions of euros, it is critical that companies understand what is happening. And start to act now.

For a summary of other developments in attempts around the world to regulate AI, please see our April 2023 insight Developments in the Regulation of Artificial Intelligence.

To follow our updates, subscribe by selecting ‘Tech & Data’ as your area of interest here

What is the proposed AI Act?

With its roots in the EU’s Ethics Guidelines for Trustworthy AI and European product safety legislation, the proposed AI Act adopts a horizontal risk-based approach to regulating the development, commodification and use of AI Systems within the EU. The Act will impose a range of obligations at various levels of the AI supply chain including providers, deployers, importers and distributors of AI systems. As outlined in the below table, the obligations will be based on the criticality of the system. In short – as the level of risk posed by the intended use of an AI System increases, so do the obligations.

[Table: risk categories of AI Systems and the corresponding obligations]

Importantly, companies adopting AI will need to understand and assess the regulatory risk of using AI for their proposed purpose. A developer may comply with the AI Act in making a product available, but the deployer will still need to ensure that it has complied with the AI Act for its use of the AI System. As many organisations are discovering with the explosion of interest in ChatGPT, this will require much greater sophistication in the identification, assessment and implementation of AI in their business than many have applied in the past.
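To make the tiered structure easier to picture, the following is a purely illustrative sketch (in Python) of the risk-level-to-treatment mapping described above. The tier names and one-line descriptions are our own shorthand for the categories discussed in this insight, not the statutory definitions.

from enum import Enum

class RiskLevel(Enum):
    # Illustrative shorthand for the risk tiers discussed in this insight.
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # detailed mandatory obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # voluntary standards encouraged

# Broad treatment of each tier as summarised in this insight (not the statutory text).
TREATMENT = {
    RiskLevel.UNACCEPTABLE: "Prohibited from being placed on the market or used in the EU",
    RiskLevel.HIGH: ("Risk management, data governance, technical documentation, logging, "
                     "transparency, human oversight, accuracy/robustness/cybersecurity"),
    RiskLevel.LIMITED: "Disclosure obligations, e.g. telling people they are interacting with an AI system",
    RiskLevel.MINIMAL: "No mandatory obligations; observing high-risk standards is encouraged",
}

def treatment_for(level: RiskLevel) -> str:
    # Look up the broad regulatory treatment for an (illustrative) risk tier.
    return TREATMENT[level]

for level in RiskLevel:
    print(f"{level.value:>12}: {treatment_for(level)}")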

Key aspects of the proposed AI Act

How is AI System defined?

The definition of AI has long been contentious. It is often seen as too broad or not broad enough (especially as AI keeps evolving at a rapid rate!). However, the EU Parliament has now agreed upon the following definition of AI System:

“a machine-based system designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”.

Notably, in a bid for global harmonisation and acceptance, this definition is closely aligned with the OECD definition of AI System.

Who does it apply to?

The proposed AI Act imposes obligations on:

  • Providers of AI systems who develop an AI system with a view to placing it on the market or putting it into service (whether for payment or free) in the EU
  • Deployers of AI systems (formerly referred to as ‘users’) who use an AI system under their authority (excluding in the course of a personal non-professional activity) in the EU
  • Importers of AI Systems established in the EU that place AI Systems on the market or into service in the EU
  • Distributors of AI Systems (other than the provider or importer) that make an AI System available in the EU

What is the extraterritorial effect?

Obligations imposed on Providers apply to non-EU entities who develop an AI System with a view to:

  • making it available on the EU market; or
  • supplying it for distribution or use in the EU market in the course of commercial activity.

Obligations are imposed in the same way on non-EU Importers that:

  • first make an AI system available on the EU market; or
  • supply an AI System for distribution or use in the EU in the course of commercial activity.

Non-EU Distributors are similarly caught by the AI Act where they make an AI System available in the EU market without affecting its properties.

Practical Tip: The proposed AI Act also captures AI Systems whose output is intended to be used in the EU, even where the Provider or Deployer is located outside the EU.

Accordingly, when developing AI Systems, Providers outside Europe will need to consider the intended territorial scope of their AI System (and make it clear early on!).

What obligations will be placed on high-risk AI Systems?

High-risk AI Systems broadly capture those AI Systems that present a significant risk of harm to the health, safety or fundamental rights of persons and (in limited circumstances) to the environment.

Accordingly, the legal obligations for high-risk AI systems are relatively onerous (including fines if a Provider misclassifies an AI System that poses a significant risk) and will likely require Providers and Deployers to modify their processes. They include:

  • Risk management system: A risk management system must be established, implemented, documented and maintained throughout the lifecycle of a high-risk AI System.
  • Data and data governance: Appropriate data governance/management practices must be applied. The training of High-risk AI Systems should be based on data sets (training, validation and testing) that (among other requirements) are appropriate for the intended purpose of the AI System, are sufficiently representative, appropriately vetted for errors, and as complete as possible.
  • Technical documentation: Specified technical documentation must be (a) drawn up before a high-risk AI System is placed on the market or put into service and (b) kept up to date.
  • Record-keeping: A high-risk AI System must be designed and developed with state-of-the-art logging capabilities (including for operation monitoring, post market monitoring and energy consumption/environmental impact); an illustrative logging sketch appears below.
  • Transparency and provision of information to users: A high-risk AI System must be designed and developed in such a way to ensure that its operation is sufficiently transparent to enable users to interpret its output and use it appropriately. It must also be accompanied by instructions that outline specified information about the AI System, e.g. the level of accuracy, robustness and cybersecurity against which it has been tested/validated.
  • Human oversight: A high-risk AI system must be designed and developed in such a way, including with appropriate human-machine interface tools, that it can be effectively overseen by natural persons while in use.
  • Accuracy, robustness and cybersecurity: A high-risk AI System must be designed, developed and deployed in such a way that it achieves, in the light of its intended purpose, an appropriate level of accuracy, robustness and cybersecurity.
  • (For Deployers) Fundamental Rights Impact Assessment: Before using a high-risk AI System, Deployers must conduct an assessment of the impact of the AI System. This includes considering the foreseeable impact on fundamental rights and the risks of harm (to both vulnerable persons and the environment) if the AI System is put into use, and developing a detailed plan of how the Deployer will mitigate those negative impacts.
  • (For Deployers) Right to explanation: Where an individual is subject to a decision taken by a Deployer on the basis of output made by a high-risk AI System which produces legal effects, or which similarly significantly affects that individual, they will have the right to request a clear and meaningful explanation of the role of the AI System in the decision-making procedure, the main parameters of the decision and the related input data.

Notably, these obligations are set out at a relatively high level in the proposed AI Act. The details of what requirements providers must comply with to meet these obligations (i.e. the conformity assessment) will be set out in harmonised standards or (where harmonised standards have not been published) common specifications. These harmonised standards have not yet been developed.
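By way of illustration only (and pending those harmonised standards), the record-keeping and right-to-explanation obligations point towards structured, per-decision logging along the lines sketched below. The field names, system name and values are our own assumptions for the example, not a format prescribed by the AI Act.

import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionLogEntry:
    # Illustrative per-decision log record for a high-risk AI System. The fields are
    # assumptions chosen to support operation monitoring, post-market monitoring and
    # later explanation of individual decisions.
    system_name: str
    system_version: str
    input_summary: dict            # key input data relied on for the output
    output: dict                   # the prediction, recommendation or decision produced
    main_parameters: dict          # main parameters that drove the output
    human_reviewer: Optional[str]  # who exercised human oversight, if anyone
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    entry_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def to_json(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)

# Example: recording a single (hypothetical) credit-scoring decision.
entry = DecisionLogEntry(
    system_name="credit-scoring-model",
    system_version="2.3.1",
    input_summary={"applicant_income_band": "C", "credit_history_years": 7},
    output={"decision": "decline", "score": 0.41},
    main_parameters={"decision_threshold": 0.55},
    human_reviewer="analyst_042",
)
print(entry.to_json())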

What happens when a serious incident occurs for high-risk AI?

Similar to the GDPR, where a serious incident is identified, the Provider (or, where applicable, the Deployer) must report the incident to the applicable national supervisory authority. This report must be made no later than 72 hours after the Provider (or, where applicable, the Deployer) becomes aware of the incident.

Tip: A serious incident is any incident that leads, or might lead to (a) the death of a person or serious damage to a person’s health, to property or the environment; or (b) a serious and irreversible disruption of the management and operation of critical infrastructure.

What obligations will be placed on limited risk AI Systems?

Providers of limited risk systems are required to ensure that:

  • where the AI System is intended to interact with a person, it is designed and developed in such a way that it is clear to the person that they are interacting with an AI system, and
  • people exposed to emotion recognition systems or biometric categorisation systems are informed of the operation of the system.

Users of AI systems that create deep fakes must disclose that the content has been artificially generated or manipulated.

Although not mandatory, Providers and Deployers of limited risk and minimal risk systems are also encouraged to observe the same standards as for high-risk systems.

What obligations will be placed on foundation models (including generative AI)?

Before a Provider places a foundation model on the market (regardless of whether it is a standalone model, embedded into an AI system or product, provided under open-source licences and/or offered as a service) they must comply with seven fundamental obligations:

  • demonstrate how they have reduced and mitigated reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law
  • only incorporate datasets that are subject to suitable data governance (this includes considering the suitability of particular datasets and how to mitigate possible biases)
  • design and develop the foundation model to achieve appropriate quality (including predictability, interpretability, corrigibility, safety and cybersecurity)
  • apply energy efficiency standards to both the design and development of foundation models
  • draw up extensive technical documentation and intelligible use instructions
  • establish quality management systems to document compliance with the AI Act, and
  • register the foundation model in the EU Database (which we note is publicly accessible).

Foundation models that are generative (such as the GPT series of models, which ChatGPT is built on) are subject to three additional obligations:

  • informing end users that they are interacting with an AI System (note this applies to all generative models, not just chatbots!)
  • implementing safeguards against generating content that violates EU Law, and
  • making publicly available a “sufficiently detailed” summary of what copyright protected data was used to train the model.

As the foundation model concept was inserted relatively late in the European Parliament process (and as it is not reflected in the European Council’s negotiating position), we expect that the obligations on foundation models will be subject to further negotiation before the final form of the law is passed.

What about ChatGPT?

The GPT series of Large Language Models upon which ChatGPT was created would be considered ‘foundation models’ under the proposed AI Act.

What types of penalties are involved?

Penalties for companies breaching the rules or failing to comply with the restrictions and requirements include the following (a short worked illustration of the turnover-based caps follows the list):

  • Non-compliance with the unacceptable risk AI system prohibition (e.g. by putting an unacceptable risk AI System on the market): administrative fine of up to 40,000,000 EUR or 7% of total worldwide annual turnover (whichever is higher)
  • Non-compliance with the data/data governance requirements and transparency requirements for high-risk AI Systems: administrative fine of up to 20,000,000 EUR or 4% of total worldwide annual turnover (whichever is higher)
  • Non-compliance with any other requirements/obligations under the proposed AI Act: administrative fine of up to 10,000,000 EUR or 2% of total worldwide annual turnover (whichever is higher), and
  • Supply of incorrect, incomplete or misleading information to notified bodies and national authorities: administrative fine of up to 5,000,000 EUR or 1% of total worldwide annual turnover (whichever is higher).
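To make the ‘whichever is higher’ mechanics concrete, below is a minimal sketch of how the proposed caps scale with worldwide annual turnover. The tier labels and the example turnover figure are our own; the fixed amounts and percentages are those listed above (and may change in the trilogue negotiations).

# Proposed maximum administrative fines from the list above: the cap is the fixed amount
# or the stated share of total worldwide annual turnover, whichever is higher.
FINE_TIERS = {
    "prohibited_practice": (40_000_000, 0.07),    # unacceptable-risk prohibition
    "data_or_transparency": (20_000_000, 0.04),   # high-risk data governance / transparency
    "other_obligation": (10_000_000, 0.02),       # any other requirement or obligation
    "incorrect_information": (5_000_000, 0.01),   # incorrect, incomplete or misleading information
}

def maximum_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    # Return the applicable cap: the higher of the fixed amount and the turnover share.
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)

# Example: a hypothetical company with EUR 2 billion in worldwide annual turnover.
turnover = 2_000_000_000
for tier in FINE_TIERS:
    print(f"{tier}: up to EUR {maximum_fine(tier, turnover):,.0f}")
# For the prohibition tier, 7% of EUR 2bn (EUR 140m) exceeds EUR 40m, so EUR 140m is the cap.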

Depending on the risk involved, the operator of an AI System can also be required to either correct the AI System, withdraw it from the market or recall it.

How will the EU implement the AI Act?

Similar to the GDPR, the AI Act is a Regulation that will have binding legal force throughout every EU Member State. The proposal is that each Member State must establish a national supervisory authority to supervise the application and implementation of the requirements of the AI Act domestically.

These national authorities will be supported by the new European AI Office (AI Office). The AI Office will be responsible for supporting the implementation of the AI Act (e.g. by issuing opinions, recommendations and guidance) and assisting Member States (including by co-ordinating joint investigations and serving as a mediator in relation to serious disagreements between relevant authorities concerning the application of the AI Act).

What should I do now?

A final version of the AI Act is expected to pass late 2023/early 2024 (with an implementation period to follow). Given the current speed of AI adoption, it is crucial for those developing or using AI systems to start thinking about likely AI regulation, governance and risk mitigation steps now. Furthermore, given the impact of the EU’s General Data Protection Regulation (GDPR) in shaping how regulators around the world approach privacy, it is likely that the AI Act will heavily influence how regulators approach AI governance. For example, in Australia the Department of Industry, Science and Resources’ discussion paper on Safe and Responsible AI in Australia references the AI Act and alludes to the possibility of adopting a similar risk-based approach.

Although the proposed AI Act is not the only framework upon which AI system users can currently model their governance structure, it does provide examples of core tenets that companies can implement when developing or deploying AI Systems. These core tenets will be especially relevant to companies that have European operations or plans to expand into Europe.

Some key questions to consider (which will vary depending on whether you are a Provider or Deployer of AI Systems) include the following (a simple checklist-style sketch follows the list):

  • Do you have an AI governance framework in place? This includes processes (e.g. an AI Impact Assessment) to assess the risks posed by AI Systems throughout their lifecycle.
  • Have you considered what data your AI system will be or has been trained upon? Is the data fit for purpose? Have you appropriately vetted it for errors (or, more realistically, can you vet it for errors)? Have you considered how the data used by your AI system interacts with privacy law and copyright law – particularly where data is collected from webpages in large quantities?
  • Have you produced appropriate technical documentation about how the AI System works (including what data it is trained upon, how it was developed and its output)?
  • Do you need to produce instructions to enable users to interpret the output and use the AI System appropriately?
  • Have you designed and developed the AI System in a way that achieves an appropriate level of accuracy and robustness?
  • Should end-users be made aware that they are interacting with an AI System and/or that the outputs have been produced by an AI System?
  • Have you implemented appropriate cybersecurity protection? Do you have systems in place to monitor the operation of your AI System?
  • Can humans oversee your AI system while it is in use?
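As a starting point only, those questions could be captured in a lightweight internal checklist that is revisited at each stage of an AI System’s lifecycle. The structure and field names below are our own suggestion, not a format taken from the AI Act.

from dataclasses import dataclass, fields

@dataclass
class AIGovernanceChecklist:
    # Illustrative self-assessment checklist based on the questions above.
    # The field names are our own; adapt them to your organisation's framework.
    governance_framework_in_place: bool = False     # AI governance framework / impact assessment process
    training_data_assessed: bool = False            # data fit for purpose, vetted for errors, privacy/copyright considered
    technical_documentation_prepared: bool = False  # how the system works, its data, development and output
    user_instructions_prepared: bool = False        # instructions so users can interpret and apply the output
    accuracy_and_robustness_tested: bool = False    # appropriate level of accuracy and robustness achieved
    ai_interaction_disclosed: bool = False          # end users told they are interacting with, or reading output from, an AI System
    cybersecurity_and_monitoring: bool = False      # protection in place and operation of the system monitored
    human_oversight_possible: bool = False          # humans can oversee the system while it is in use

    def open_items(self) -> list:
        # Return the checklist items that still need attention.
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example usage with a hypothetical, partially complete review:
checklist = AIGovernanceChecklist(governance_framework_in_place=True, human_oversight_possible=True)
print("Outstanding items:", checklist.open_items())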

Although this topic has not yet received much public attention, the EU has placed a particular focus on the energy efficiency of foundation models (both during design and development). Accordingly, if you are developing or deploying a foundation model, you should also look at how you can reduce energy use, reduce resource use/waste and increase the overall efficiency of the AI System.

Start acting now

The proposed AI Act still has some way to go before it is finalised, with much political (dis)agreement over its scope and content expected during the upcoming negotiations between the European Parliament, the European Commission and the Council of the European Union. Furthermore, key concepts in the AI Act (such as the harmonised standards and the regulatory sandboxes intended to facilitate the development and testing of AI systems under strict regulatory oversight before they are placed on the market) have not yet been developed in detail. But now is the time to start thinking about necessary steps.

It is also important to note that the AI Act is only the beginning of regulatory movements in the EU directed at AI Systems. For example, the European Commission has proposed a new ‘Product Liability Directive’ (to address liability for software products, including AI Systems) and the ‘AI Liability Directive’ (to adapt non-contractual civil liability rules to AI Systems). The EU is a must watch space for the future of AI regulation.

Disclaimers: No foundation models were used in the development of this piece. In a possible sign of emergent intelligence, ChatGPT refuses to ingest lengthy texts of the kind generated by EU lawmakers, forcing us to rely on our still superior biological attention mechanisms.
