- The European Union’s proposed Artificial Intelligence Act (AI Act) is drawing closer, with negotiations about to start between the European Commission, European Parliament and Council of the European Union on the final form of the AI Act. The final AI Act is expected to be passed in late 2023 or early 2024.
- The proposed AI Act (in the form proposed by the European Parliament) takes a risk-based approach to regulating AI Systems, focusing on the intended purpose of the AI System. This includes prohibiting some AI Systems (such as those used for social scoring and predictive policing), imposing an extensive range of obligations on ‘high-risk’ AI Systems and subjecting ‘foundation models’, such as those underpinning ChatGPT and Bing AI, to separate governance obligations.
- It provides insight for companies worldwide into how AI regulation is likely to develop globally.
- Companies should start thinking about risk mitigation steps now and put appropriate governance into place around how they will trial, use or deploy AI in their businesses.
Introduction
Two long years since the release of the first draft in April 2021, the European Union’s (EU) proposed Artificial Intelligence Act (AI Act) has passed a major milestone. On 14 June 2023, the European Parliament agreed (by a significant majority) to a final negotiating position on the AI Act.
This brings the AI Act one (significant) step closer to becoming law. The next (and final) step is the commencement of trilogue negotiations on the final form of the AI Act between the European Parliament, Council of the European Union and the European Commission.
As the first significant attempt at regulating AI on a large scale, the AI Act is a major ‘must watch’ development for companies around the world.
- Once passed, it will have extraterritorial effect (noting it is unlikely to come into effect for at least a year or two)
- It is likely to influence how other regulators approach the development of AI regulation around the world
- The proposals are likely to influence stopgap voluntary rules that the EU is reportedly developing with input from tech companies such as Google
- The voluntary rules are likely to fall under an “AI Pact” intended to involve all major European and non-European AI actors.
This featured insight – the latest in our generative AI and ChatGPT series – provides a high-level summary of what companies need to know about the current form of the proposed AI Act (as at June 2023). We also share tips on regulatory and governance risks companies should consider – and the steps they should take to mitigate those risks.
Finalisation of the AI Act may take another six months or more, but with detailed (and in some cases, extensive) requirements and potential penalties reaching into tens of millions of euros, it is critical that companies understand what is happening. And start to act now.
For a summary of other developments in attempts around the world to regulate AI, please see our April 2023 insight Developments in the Regulation of Artificial Intelligence.
To follow our updates, subscribe by selecting ‘Tech & Data’ as your area of interest here.
What is the proposed AI Act?
With its roots in the EU’s Ethics Guidelines for Trustworthy AI and European product safety legislation, the proposed AI Act adopts a horizontal, risk-based approach to regulating the development, commercialisation and use of AI Systems within the EU. The Act will impose a range of obligations at various levels of the AI supply chain, including on providers, deployers, importers and distributors of AI Systems. As outlined in the table below, the obligations are based on the criticality of the system. In short, as the level of risk posed by the intended use of an AI System increases, so do the obligations.
[Table: AI System risk levels under the proposed AI Act and the corresponding obligations]
Importantly, companies adopting AI will need to understand and assess the regulatory risk of using AI for their proposed purpose. A developer may comply with the AI Act in making a product available, but the deployer will still need to ensure that it has complied with the AI Act for its use of the AI System. As many organisations are discovering with the explosion of interest in ChatGPT, this will require far greater sophistication in the identification, assessment and implementation of AI in their businesses than many have applied in the past.
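To make the risk-based logic concrete, the following is a minimal, purely illustrative sketch (in Python) of how a deployer might record an intended use and triage it against the risk tiers discussed in this insight as part of an internal assessment. The class, function and field names (and the example purposes) are our own hypothetical constructs, not anything prescribed by the AI Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Risk categories discussed in the proposed AI Act (illustrative labels only)."""
    UNACCEPTABLE = "prohibited practice"   # e.g. social scoring, predictive policing
    HIGH = "high-risk"                     # extensive obligations and conformity assessment
    LIMITED = "limited risk"               # transparency obligations
    MINIMAL = "minimal risk"               # voluntary codes of conduct


@dataclass
class IntendedUse:
    """Hypothetical record of how an AI System is intended to be used."""
    system_name: str
    purpose: str
    output_used_in_eu: bool   # relevant to the Act's extraterritorial scope
    provisional_tier: RiskTier


def requires_full_assessment(use: IntendedUse) -> bool:
    """Flag uses that warrant a detailed AI Impact Assessment before deployment.

    This is an internal triage heuristic, not a legal determination:
    prohibited and high-risk uses are always escalated, and anything whose
    output is intended to be used in the EU is flagged for review.
    """
    if use.provisional_tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH):
        return True
    return use.output_used_in_eu


# Example: a chatbot whose output will be used in the EU
chatbot = IntendedUse(
    system_name="customer-service-chatbot",
    purpose="answer routine customer queries",
    output_used_in_eu=True,
    provisional_tier=RiskTier.LIMITED,
)
print(requires_full_assessment(chatbot))  # True -> escalate for review
```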
Key aspects about the proposed AI Act
How is an AI System defined?
The definition of AI has long been contentious. It is often seen as too broad or not broad enough (especially as AI keeps evolving at a rapid rate!). However, the EU Parliament has now agreed upon the following definition of AI System: “a machine-based system designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”. Notably, in a bid for global harmonisation and acceptance, this definition is closely aligned with the OECD definition of AI System.
Who does it apply to?
The proposed AI Act imposes obligations on parties at various levels of the AI supply chain, including Providers, Deployers, Importers and Distributors of AI Systems.
What is the extraterritorial effect?
Obligations imposed on Providers apply to non-EU entities who develop an AI System with a view to placing it on the market or putting it into service in the EU, or where the output produced by the AI System is intended to be used in the EU.
Obligations are imposed in the same way on non-EU Importers that:
Non-EU Distributors are similarly caught by the AI Act where they make an AI System available in the EU market without affecting its properties. Practical Tip: The proposed AI Act focuses on AI Systems with output that is intended to be used in the EU by Providers or Deployers. Accordingly, when developing AI Systems, Providers outside Europe will need to consider the intended territorial scope of their AI System (and make it clear early on!).
What obligations will be placed on high-risk AI Systems?
The high-risk category broadly captures AI Systems that present a significant risk of harm to persons and (in limited circumstances) the environment. Accordingly, the legal obligations for high-risk AI Systems are relatively onerous (including fines if a Provider misclassifies an AI System that poses a significant risk) and will likely require Providers and Deployers to modify their processes. They include requirements relating to risk management, data and data governance, technical documentation, record keeping, transparency and provision of information to users, human oversight, and accuracy, robustness and cybersecurity.
Notably, these obligations are set out at a relatively high level in the proposed AI Act. The details of what requirements Providers must comply with to meet these obligations (i.e. the conformity assessment) will be set out in harmonised standards or (where harmonised standards have not been published) common specifications. These harmonised standards have not yet been developed.
What happens when a serious incident occurs involving high-risk AI?
Similar to the GDPR, where a serious incident is identified, the Provider (or, where applicable, the Deployer) must report the incident to the applicable national supervisory authority. This report must be made no later than 72 hours after the Provider (or, where applicable, the Deployer) becomes aware of the incident. Tip: A serious incident is any incident that leads, or might lead, to (a) the death of a person or serious damage to a person’s health, to property or to the environment; or (b) a serious and irreversible disruption of the management and operation of critical infrastructure.
What obligations will be placed on limited risk AI Systems?
Providers of limited risk systems are required to ensure that natural persons are informed that they are interacting with an AI System (unless this is obvious from the circumstances and context of use).
Users of AI systems that create deep fakes must disclose that the content has been artificially generated or manipulated. Although not mandatory, Providers and Deployers of limited risk and minimal risk systems are also encouraged to observe the same standards as for high-risk systems.
What obligations will be placed on foundation models (including generative AI)?
Before a Provider places a foundation model on the market (regardless of whether it is a standalone model, embedded into an AI system or product, provided under open-source licences and/or offered as a service) they must comply with seven fundamental obligations, covering: identifying and mitigating reasonably foreseeable risks; data governance; achieving appropriate levels of performance, predictability, interpretability, safety and cybersecurity; energy and resource efficiency; technical documentation and instructions for use; a quality management system; and registration in an EU database.
Foundation models that are generative (such as the GPT series of models, which ChatGPT is built on) are subject to three additional obligations: complying with the transparency obligations that apply to limited risk systems, training and designing the model with adequate safeguards against generating content that breaches EU law, and publishing a sufficiently detailed summary of the copyright-protected training data used.
As the foundation model concept was inserted relatively late in the European Parliament process (and as it is not reflected in the Council of the European Union’s negotiating position), we expect that the obligations on foundation models will be subject to further negotiation before the final form of the law is passed.
What about ChatGPT?
The GPT series of Large Language Models on which ChatGPT is built would be considered ‘foundation models’ under the proposed AI Act.
What types of penalties are involved?
Penalties for companies breaching the rules or failing to comply with the restrictions and requirements include fines of up to €40 million or 7% of total worldwide annual turnover (whichever is higher) for the most serious breaches, such as deploying prohibited AI practices, with lower maximum fines applying to other categories of non-compliance.
Depending on the risk involved, the operator of an AI System can also be required to correct the AI System, withdraw it from the market or recall it.
How will the EU implement the AI Act?
Similar to the GDPR, the AI Act is a Regulation that will have binding legal force throughout every EU Member State. The proposal is that each Member State must establish a national supervisory authority to supervise the application and implementation of the requirements of the AI Act domestically. These national authorities will be supported by the new European AI Office (AI Office). The AI Office will be responsible for supporting the implementation of the AI Act (e.g. by issuing opinions, recommendations and guidance) and assisting Member States (including by co-ordinating joint investigations and serving as a mediator in relation to serious disagreements between relevant authorities concerning the application of the AI Act).
What should I do now?
A final version of the AI Act is expected to pass in late 2023 or early 2024 (with an implementation period to follow). Given the current speed of AI adoption, it is crucial for those developing or using AI systems to start thinking about likely AI regulation, governance and risk mitigation steps now. Furthermore, given the impact of the EU’s General Data Protection Regulation (GDPR) on shaping how regulators around the world approach privacy, it is likely that the AI Act will heavily influence how regulators approach AI governance. For example, in Australia the Department of Industry, Science and Resources’ discussion paper on Safe and Responsible AI in Australia references the AI Act and alludes to the possibility of adopting a similar risk-based approach.
Although the proposed AI Act is not the only framework on which users of AI Systems can currently model their governance structures, it does provide examples of core tenets that companies can implement when developing or deploying AI Systems. These core tenets will be especially relevant to companies that have European operations or plans to expand into Europe.
Some key questions to consider (which will vary depending on whether you are a provider or deployer of AI Systems) include the following; a simple illustrative sketch of how these questions might be captured in an internal assessment record appears after the list:
- Do you have an AI governance framework in place? This includes processes (e.g. an AI Impact Assessment) to assess the risks posed by AI Systems throughout their lifecycle.
- Have you considered what data your AI system will be or has been trained on? Is the data fit for purpose? Have you appropriately vetted it for errors (or, more realistically, can you vet it for errors)? Have you considered how the data used by your AI system interacts with privacy law and copyright law, particularly where data is collected from webpages in large quantities?
- Have you produced appropriate technical documentation about how the AI System works (including what data it was trained on, how it was developed and its output)?
- Do you need to produce instructions to enable users to interpret the output and use the AI System appropriately?
- Have you designed and developed the AI System in a way that achieves an appropriate level of accuracy and robustness?
- Should end-users be made aware that they are interacting with an AI System and/or that the outputs have been produced by an AI System?
- Have you implemented appropriate cybersecurity protection? Do you have systems in place to monitor the operation of your AI System?
- Can humans oversee your AI system while it is in use?
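As a very rough illustration only (and not a compliance template), the sketch below shows one way the questions above might be captured as a structured assessment record so that answers can be tracked over a system’s lifecycle. All class and field names are our own hypothetical choices, not terms drawn from the AI Act.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class AIImpactAssessment:
    """Hypothetical record of the key governance questions discussed above."""
    system_name: str
    governance_framework_in_place: bool
    training_data_fit_for_purpose: bool
    training_data_vetted_for_errors: bool
    privacy_and_copyright_reviewed: bool
    technical_documentation_produced: bool
    user_instructions_produced: bool
    accuracy_and_robustness_assessed: bool
    users_informed_of_ai_interaction: bool
    cybersecurity_controls_in_place: bool
    monitoring_in_place: bool
    human_oversight_possible: bool
    notes: dict = field(default_factory=dict)

    def open_items(self) -> list[str]:
        """Return the questions that have not yet been answered 'yes'."""
        return [k for k, v in asdict(self).items()
                if isinstance(v, bool) and not v]


# Example: an assessment with outstanding items to resolve before deployment
assessment = AIImpactAssessment(
    system_name="document-summariser",
    governance_framework_in_place=True,
    training_data_fit_for_purpose=True,
    training_data_vetted_for_errors=False,
    privacy_and_copyright_reviewed=False,
    technical_documentation_produced=True,
    user_instructions_produced=True,
    accuracy_and_robustness_assessed=True,
    users_informed_of_ai_interaction=True,
    cybersecurity_controls_in_place=True,
    monitoring_in_place=False,
    human_oversight_possible=True,
)
print(json.dumps(assessment.open_items(), indent=2))
```

A record like this is only a starting point; the substantive judgement behind each answer (and the supporting evidence) is what matters for the obligations discussed in this insight.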
Although this topic has not yet received much public attention, the EU has placed a particular focus on the energy efficiency of foundation models (both during design and development). Accordingly, if you are developing or deploying a foundation model, you should also look at how you can reduce energy use, reduce resource use/waste and increase the overall efficiency of the AI System.
Start acting now
The proposed AI Act still has some way to go before it is finalised, with much political (dis)agreement over its scope and content expected during the upcoming negotiations between the European Parliament, European Commission and Council of the European Union. Furthermore, key concepts in the AI Act (such as the harmonised standards and the regulatory sandboxes intended to facilitate the development and testing of AI systems under strict regulatory oversight before they are placed on the market) have not yet been developed in detail. But now is the time to start thinking about the necessary steps.
It is also important to note that the AI Act is only the beginning of regulatory movements in the EU directed at AI Systems. For example, the European Commission has proposed a new ‘Product Liability Directive’ (to address liability for software products, including AI Systems) and the ‘AI Liability Directive’ (to adapt non-contractual civil liability rules to AI Systems). The EU is a must watch space for the future of AI regulation.
Disclaimers: No foundation models were used in the development of this piece. In a possible sign of emergent intelligence, ChatGPT refuses to ingest texts as lengthy as those generated by EU lawmakers, forcing us to rely on our still superior biological attention mechanisms.