Insight

Developments in the regulation of Artificial Intelligence

TL;DR

  • China currently has the most comprehensive suite of AI regulations in the world, including its newly released draft measures for managing generative AI.
  • The EU is looking to introduce its new AI legislative framework this year to promote and manage the risks of AI.
  • Australia, like the rest of the world, is considering further regulatory measures, while relying on existing law and managing risk by rolling out guidelines for best practices in AI. 

Artificial intelligence (AI) has captured the attention of the world over the last 12 months. From AI chatbots to AI-generated art and inventions, AI has the potential to radically transform our economy, our society, and humanity.

Recently, the Future of Life Institute published an open letter signed by over 1000 AI experts warning that AI could pose ‘profound risks to society and humanity’. The letter called on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least 6 months. This letter has drawn fierce criticism from Australia and around the world, not the least of which is the fact that any global ‘pause’ on research and training of AI systems is improbable, if not, impossible.

However, the letter has drawn focus to the broader discussions that were already underway around the world on the practical implications, ethics, and future of AI, and has led to calls for multilateral cooperation. These issues are already being examined intensely by scientists, researchers, businesses, and international institutions around the world. Governments are considering how they can encourage the potential world-changing economic and social benefits of AI while managing the potential risks, challenges and downsides.

This will inevitably involve greater regulation of AI that all businesses, large and small, will need to be aware of moving forward. This Insight examines the current state of AI regulation in Australia and around the world, and the next steps signalled by governments. 

AUSTRALIA

No AI-specific legislation has yet been introduced in Australia, and any AI governance relies on guidance and existing law. This has led to a number of calls for an AI-specific legislative framework to handle the challenges and issues relating to this new technology. 

In March 2022, the Department of the Prime Minister and Cabinet released an issues paper on the regulation of AI calling for submissions on how government regulatory settings and systems could maximise opportunities to enable and facilitate the responsible use of AI.

The issues paper refers to a number of recent developments in this space.

The Federal Government is yet to release an official response to submissions received in response to the issues paper and subsequent consultation with stakeholders. Some submissions have been made public, such as those from the Law Council of Australia and KPMG. The overarching theme of these submissions is that there is a need for a regulatory body responsible for developing and enforcing AI legislation. However, none of the above is currently binding, and businesses and organisations are left to rely on guidance and existing laws such as IP, administrative, consumer protection, discrimination, and privacy laws.

Following the commencement of the Online Safety Act 2021 in January 2022, the eSafety Commissioner has used its powers to issue legal notices under the Basic Online Safety Expectations, compelling some important algorithmic transparency insights from the technology industry around online recommendation systems. The eSafety Commissioner has also suggested that its Safety by Design principles could apply to any new technology such as AI, and that ‘[s]hifting the burden of proof onto technology companies to show they have taken meaningful steps to assess risks, understand potential harms and engineer out potential misuse before these products are released into the wild could make a decisive difference.’

In March 2023, the CSIRO launched a Responsible AI Network, through its National AI Centre (NAIC), to support Australian companies in using and creating AI ethically, safely and optimally. This follows the launch of the NAIC’s Australia’s AI Ecosystem Momentum report, outlining how local AI use is rapidly maturing.

At a state level, in 2022, NSW introduced its AI Assurance Framework to assist state agencies in designing, building and using AI-enabled products and solutions, and to help them identify risks that may be associated with their projects. The framework is mandatory for all NSW government agencies.

EUROPEAN UNION (EU)

The EU is set to finalise the first ever legal framework on AI this year. This framework will operate as legislation, dubbed the EU AI Act, and will, for the first time, provide a legislative definition of AI. The proposed definition aims to be as technology-neutral and future-proof as possible and, as such, is intentionally broad. Specifically, the AI Act proposes to define an AI system as ‘a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments.’

The AI Act also establishes the regulation of AI based on classifying AI products into one of three risk categories: an unacceptable risk, a high risk, or a low or minimal risk. Different regulation will apply depending on the risk the product carries.

For example, AI systems with an unacceptable risk are prohibited, and placing products with unacceptable levels of risk on the market can lead to fines of up to €30 million or 6% of annual worldwide turnover. Examples of systems that would be categorised as unacceptable risk include AI that can materially distort a person’s behaviour in a manner that could cause psychological harm, or a system that could exploit vulnerabilities of a specific group of persons due to age or disability.

An AI system will be categorised as high risk if it is:

  • intended to be used as a safety component of a product, or is itself a product, that is subject to third-party ex-ante conformity assessment, or
  • a stand-alone AI system that impacts fundamental rights, such as systems used for the biometric identification and categorisation of natural persons, the management of critical infrastructure, education and vocational training, law enforcement, and the administration of justice and democratic processes.

High risk AI systems will need to meet certain legal requirements in relation to data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security.

Low risk systems will be less regulated, with the only requirement being that the user must be made aware that they are interacting with AI (unless this is already obvious). Which AI systems fall into the low-risk category is still being debated.
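
To make the tiered approach concrete, the sketch below shows how an organisation might triage a system against these draft categories and estimate its maximum fine exposure for a prohibited system. This is a simplified illustration only: the triage flags, function names, and threshold logic are our own assumptions, not the text of the draft Act.

```python
# Illustrative sketch only -- not legal advice. The tiers mirror the draft
# EU AI Act's categories, but the triage flags and rules are simplified
# assumptions based on the summary above.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"    # prohibited outright
    HIGH = "high"                    # subject to strict legal requirements
    LOW_OR_MINIMAL = "low/minimal"   # transparency duties only

def classify(system: dict) -> RiskTier:
    """Hypothetical triage of an AI system description against the draft tiers."""
    if system.get("materially_distorts_behaviour") or system.get("exploits_vulnerable_groups"):
        return RiskTier.UNACCEPTABLE
    if system.get("safety_component") or system.get("impacts_fundamental_rights"):
        return RiskTier.HIGH
    return RiskTier.LOW_OR_MINIMAL

def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Cap for prohibited systems: the greater of EUR 30 million or 6% of turnover."""
    return max(30_000_000.0, 0.06 * annual_worldwide_turnover_eur)

# Example: a chatbot that only needs to disclose that the user is talking to AI.
print(classify({"impacts_fundamental_rights": False}))  # RiskTier.LOW_OR_MINIMAL
print(max_fine_eur(1_000_000_000))                      # 60000000.0 (6% exceeds EUR 30m)
```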

The European Parliament is scheduled to vote on the AI Act in the first half of this year. Following this vote, discussions between the Member States, the Parliament and the Commission will begin, and the final AI Act is predicted to be adopted by the end of 2023.

Currently, the EU does provide some rules around automated decision-making under Article 22 of the General Data Protection Regulation (GDPR), including the right not to be subject to a decision based solely on automated processing that produces legal effects concerning the subject or similarly significantly affects the subject, unless the decision is necessary for a contract, authorised by law, or based on the subject’s explicit consent, as outlined in Art 22(2).
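
As a rough illustration of how Article 22 operates in practice, the following sketch gates a solely automated decision on the Article 22(2) exceptions and routes it to human review otherwise. The field names and the review step are hypothetical, not drawn from the Regulation.

```python
# Illustrative sketch of an Article 22 GDPR gate. Field names are our own;
# the article itself is summarised above, not reproduced.
from dataclasses import dataclass

@dataclass
class Decision:
    solely_automated: bool
    legal_or_similar_effect: bool
    necessary_for_contract: bool = False   # Art 22(2)(a)
    authorised_by_law: bool = False        # Art 22(2)(b)
    explicit_consent: bool = False         # Art 22(2)(c)

def may_remain_automated(d: Decision) -> bool:
    """True if the decision may stand without human involvement."""
    if not (d.solely_automated and d.legal_or_similar_effect):
        return True  # Article 22 is not engaged
    # Art 22(2) exceptions: contract necessity, legal authorisation, or consent
    return d.necessary_for_contract or d.authorised_by_law or d.explicit_consent

loan_refusal = Decision(solely_automated=True, legal_or_similar_effect=True)
if not may_remain_automated(loan_refusal):
    print("Route to human review before the decision takes effect")
```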

On 4 April 2023, Italy became the first Western country to ban ChatGPT, the popular AI chatbot, albeit temporarily. ChatGPT would be covered by the new AI Act’s rules; in the meantime, Italy’s decision has prompted other European Member States, such as Ireland and France, to seek to learn more about ChatGPT and the findings of Italy’s regulators. On 13 April 2023, the European privacy regulators decided to launch a dedicated task force to address the privacy concerns related to the world’s most famous chatbot. Only time will tell whether these measures will have any significant impact on the final version of the AI Act.

UNITED KINGDOM (UK)

The UK Government has taken a different approach to that of the EU, utilising its existing resources.

In a 2023 White Paper from the Department for Science, Innovation and Technology, the UK government points to its already ‘strong approach to the rule of law, supported by [its] technology-neutral legislation and regulations’ and highlights that its laws, regulators and courts already address some of the emerging risks posed by AI technologies through existing laws such as discrimination, product safety, and consumer law. It also points to regulators such as the UK Medicines and Healthcare products Regulatory Agency, which in 2022 published a roadmap clarifying, through guidance, the requirements for AI and software used in medical devices.

The Department for Digital, Culture, Media and Sport has also issued guidance on data ethics for public sector organisations (the Data Ethics Framework) and the National Cyber Security Centre has provided guidance on Intelligent Security Tools.

The UK Government intends to continue on this path. The 2023 White Paper, which builds on its 2021 ten-year National AI Strategy, proposes that instead of introducing a new commissioner or regulatory body responsible for the development and regulation of AI, the UK will rely on existing regulators, such as the Health and Safety Executive, the Equality and Human Rights Commission, and the Competition and Markets Authority, to establish their own processes to regulate AI in their respective industries.

This means that instead of introducing new laws to regulate the ever-growing area of AI, these regulators will have to use the powers they already have. The White Paper outlines five principles that the proposed regulators should consider in order to enable the safe and innovative use of AI. These principles are:

  • Safety, security and robustness: ensuring that AI operates safely, and risks are carefully managed.
  • Transparency and explainability: meaning organisations should be able to adequately explain and communicate their AI system and the processes involved.
  • Fairness: the AI system should comply with all relevant UK laws.
  • Accountability and governance: ensuring there is adequate oversight of the AI system.
  • Contestability and redress: there must be a clear way to dispute any outcome, particularly harmful outcomes, or decisions generated by AI.

In that spirit, the Alan Turing Institute began piloting an AI Standards Hub in 2022, which is charged with creating practical tools for businesses, bringing the UK’s AI community together through a new online platform, and developing educational materials to help organisations develop and benefit from global standards.

The Information Commissioner’s Office has also developed guidance on how organisations can explain their use of AI to individuals, as well as Guidance on AI and Data Protection and an AI and Data Protection Risk Toolkit.

Like the EU, the UK’s GDPR provides rules around automated decision-making under its version of Article 22.

In March 2023, the Department for Education released its policy paper Generative artificial intelligence in education.

Over the next twelve months, the UK government will be asking all nominated regulators to issue practical guidance to organisations within their respective industries to outline how to effectively implement the above principles.

UNITED STATES

The United States does not currently have a comprehensive legal framework to regulate AI’s development and use, and appears to be taking a similar path to the UK.

The first official federal foray into the AI regulatory space was in 2020, when the White House issued a Guidance for Regulation of Artificial Intelligence Applications. This guidance established a framework for federal agencies to assess potential regulatory and non-regulatory approaches to emerging AI issues and includes ten principles to guide US agencies when deciding whether and how to regulate AI.

Since then, US agencies have been working on the regulation of AI in their sectors. Examples include:

  • the Department of Defense’s report, AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense
  • the Food and Drug Administration’s Artificial Intelligence and Machine Learning in Software as a Medical Device Action Plan
  • the Federal Trade Commission’s Using Artificial Intelligence and Algorithms guidance
  • the Department of Health & Human Services’ Trustworthy AI Playbook and
  • the recent rules issued by the Consumer Financial Protection Bureau to implement Dodd-Frank Act standards governing the use of ‘automated valuation models’ in the housing market.

A bill was introduced in Congress in June 2022 to counter the risks of AI being abused in ‘ways that may pose a catastrophic risk’ but that bill has not moved forward. In October 2022, the White House issued a Blueprint for an AI Bill of Rights outlining five key protections for citizens in the new era of AI, namely:

  • safe and effective systems
  • algorithmic discrimination protection
  • data privacy
  • notice and explanation and
  • human alternatives, consideration, and fallback.

This blueprint, however, is non-binding. Then, on 26 January 2023, the National Institute of Standards and Technology released the AI Risk Management Framework 1.0, a guidance document to help manage AI’s potential risks to individuals, organisations and society, which is, again, non-binding.

In a rare move, the US Chamber of Commerce in March 2023 called for the regulation of artificial intelligence technology to ensure it does not hurt growth or become a national security risk. It remains to be seen whether Congress will act to introduce more comprehensive regulation, but state and local governments have now started to address these issues. For example, multiple bills and resolutions regulating AI have been introduced and enacted in various states, such as Illinois’ Artificial Intelligence Video Interview Act, which requires employers to notify applicants before a video interview that artificial intelligence may be used to analyse the interview and consider the applicant’s fitness for the position. The New York City Council has also enacted its own Bias Audit Law, which prohibits companies from using automated tools to hire candidates or promote employees unless the tools have been independently audited for bias. This law commences on 15 April 2023.
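
To illustrate what a bias audit under a law like New York City’s might measure, the sketch below computes selection rates and impact ratios across demographic groups, a metric commonly used in such audits. The figures, group labels, and any threshold an auditor would apply are assumptions for illustration; the law’s implementing rules define the required calculations.

```python
# Illustrative sketch of one metric a bias audit might report: the impact
# ratio (each group's selection rate divided by the highest group's rate).
# The data and group labels below are made up for illustration.
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (candidates selected, total candidates)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

audit_data = {"group_a": (40, 100), "group_b": (25, 100)}
print(impact_ratios(audit_data))  # {'group_a': 1.0, 'group_b': 0.625}
```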

On 12 April 2023, the US National Telecommunications and Information Administration (NTIA) launched a request for public comment on potential accountability measures for AI to ensure ‘that AI systems are legal, effective, ethical, safe, and otherwise trustworthy.’ The NTIA states that the insights gathered from this consultation process will inform the Biden Administration’s work to ensure a cohesive and comprehensive federal government approach to AI-related risks and opportunities.

On 13 April 2023, US Senator Chuck Schumer launched an effort to establish rules on artificial intelligence to address national security and education concerns. Any such proposal could take months to become law, even if it were to attract bipartisan support in Congress.

CHINA

China has been seen by many as ahead of the curve when it comes to the regulation of AI.

China’s regulations have been guided since 2017 by the State Council of the People's Republic of China's A Next Generation Artificial Intelligence Development Plan. This plan promoted the development of AI, as well as the laws, regulations, and ethical standards that support that development.

As part of this plan, in 2021 the National Special Committee of New-Generation Artificial Intelligence Governance issued a Code of Ethics for New-generation Artificial Intelligence, which aims to integrate ethics and morals into the whole life cycle of AI and provide ethical guidance for stakeholders engaged in AI-related activities. It includes six fundamental ethical standards that must be complied with:

  • improving human well-being
  • promoting fairness and justice
  • protecting privacy and security
  • ensuring controllability and credibility
  • strengthening responsibility and
  • improving ethical literacy.

AI activities also need to comply with a range of management, R&D, supply, and use standards.

There are also a number of rules governing automated decision-making in China’s national data privacy law, the Personal Information Protection Law (2021), aimed at ensuring transparency and fairness.

In March 2022, the Internet Information Service Algorithmic Recommendation Management Provisions came into effect, requiring providers of AI-based personalised recommendations in mobile applications to uphold user rights, including protecting minors from harm, allowing users to select or delete tags about their personal characteristics, and giving users the right to switch off algorithmic recommendation services.

In November 2022, the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology, and the Ministry of Public Security jointly issued new regulations on the Administration of Deep Synthesis of Internet-based Information Services. These new rules placed significant restrictions on deep synthesis technology, such as deep fakes and other AI-generated media, including a requirement to add labels or tags to AI-generated content.
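
As a rough sketch of what such a labelling obligation can look like in practice, the example below wraps a piece of generated media with both a machine-readable marker and a user-facing notice. The metadata schema, field names, and notice text are our assumptions; the regulations impose the obligation but do not prescribe this format.

```python
# Illustrative sketch only: one way a provider might tag AI-generated media
# with a disclosure label. The schema below is an assumption, not a format
# prescribed by the deep synthesis rules.
import json
from datetime import datetime, timezone

def label_generated_content(content: bytes, model_name: str) -> dict:
    """Attach a machine-readable marker and user-facing notice to generated media."""
    return {
        "payload": content.hex(),                        # the generated media itself
        "ai_generated": True,                            # machine-readable marker
        "notice": "This content was generated by AI.",   # user-facing label
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

tagged = label_generated_content(b"<synthetic image bytes>", "example-model")
print(json.dumps({k: v for k, v in tagged.items() if k != "payload"}, indent=2))
```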

The regulation of AI in China is also being enacted at a provincial and local level. For example, in September 2022, Shanghai passed China’s first provincial-level law addressing AI development, the Shanghai Regulations on Promoting the Development of the AI Industry. These regulations introduce a graded management system and establish a ‘sandbox’ that provides space for businesses to explore and test their technologies. At the same time, Shenzhen passed its own Shenzhen Special Economic Zone Artificial Intelligence Industry Promotion Regulations.

Recently, the CAC released draft measures for managing generative AI services, which include a requirement for companies to submit to a security assessment before launching generative AI tools and to align those tools with PRC values. The CAC has said that providers will be responsible for avoiding discrimination when designing algorithms and training data, and that generated content should be true and accurate. The public can comment on the proposed measures until 10 May 2023.

SINGAPORE

AI has been flagged by the Singapore Government as one of four frontier technologies that are key for development in Singapore. In 2017, AI Singapore was launched to build national AI capabilities by bringing together Singapore-based research institutions, AI start-ups, and companies developing AI products to support AI R&D and develop local talent.

Singapore, as with many other nations, does not have a comprehensive AI-specific law or regulator and currently relies on existing legislation, common law, regulators, and new national guidelines. 

In 2018, Singapore established the Advisory Council on the Ethical Use of AI and Data in order to:

  • advise the government on ethical, policy and governance issues arising from the use of data-driven technologies in the private sector and
  • support the Government in providing general guidance to businesses to minimise ethical, governance and sustainability risks, and to mitigate adverse impact on consumers from the use of data-driven technologies.

In 2019, Singapore launched its National Artificial Intelligence Strategy aimed at:

  • identifying areas to focus attention and resources on at a national level
  • laying out how the government, companies, and researchers can work together to realise the positive impact from AI
  • addressing areas where attention is needed to manage change and/or manage new forms of risks that emerge when AI becomes more pervasive.

In 2021, the government launched two new national AI programmes:

  • the National AI Programme in Government, which aims to further advance the Government’s digital transformation efforts and
  • the National AI Programme in Finance, which aims to develop Singapore into a global hub for financial institutions to research, develop, and deploy AI solutions.

Various regulatory agencies have also issued guidance and policy papers on AI, for example:

  • In 2018, the Monetary Authority of Singapore (MAS) released its Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector that lists a set of generally accepted principles for the use of artificial intelligence and data analytics.
  • In 2021, MAS launched the Veritas Initiative, which aims to enable financial institutions to evaluate their AI and data analytics (AIDA)-driven solutions against the principles of fairness, ethics, accountability and transparency. In February 2022, the MAS-led Veritas Consortium, comprising 27 industry organisations, released five white papers detailing assessment methodologies for the FEAT principles, to guide the responsible use of AI by financial institutions.
  • In 2019, the Personal Data Protection Commission (PDPC) released a Model AI Governance Framework, quickly followed by an updated second edition in 2020, which provides detailed and readily implementable guidance to private sector organisations to address key ethical and governance issues when deploying AI solutions. The PDPC has also issued an Implementation and Self-Assessment Guide for Organisations and a Compendium of Use Cases Volume 1 and Volume 2 that demonstrates how local and international organisations across different sectors and sizes implemented or aligned their AI governance practices with all sections of the Model Framework.
  • In 2020, the Infocomm Media Development Authority (IMDA) and the PDPC released A Guide to Job Redesign in the Age of AI to assist organisations and employees in understanding how existing job roles can be redesigned to harness the potential of AI, so that the value of their work is increased.
  • In May 2022, IMDA/PDPC launched AI Verify, the world’s first AI Governance Testing Framework and Toolkit for companies that wish to demonstrate responsible AI in an objective and verifiable manner.

Some new legislation supporting Singapore’s AI efforts has been enacted, such as the Road Traffic (Autonomous Vehicles) Rules 2017 and amendments to the Road Traffic Act 1961 that facilitate and regulate trials of AI-driven autonomous vehicles, the Cybersecurity Act 2018, and the Protection From Online Falsehoods and Manipulation Act 2019.

JAPAN

In 2019, the Japanese government published the Social Principles of Human-Centric AI as principles for implementing AI in society. This document set out seven principles with respect to the development and use of AI, namely:

  • privacy protection
  • ensuring security
  • fair competition
  • fairness, accountability, and transparency
  • innovation
  • the human-centric principle and
  • the principle of education/literacy.

Japan’s approach to AI is based on these social principles.

Like the UK and the US, Japan has no comprehensive legislation regulating AI, instead relying on non-binding guidance.

Existing legislation and civil law can also be relied upon to constrain AI activities.

GLOBAL GUIDANCE

While they lack the legal capacity to regulate AI directly, the following global and regional institutions have started providing guidance on AI and have provided forums for discussions on the ethics and regulation of AI more broadly. These institutions will likely influence new regulation and regulatory agencies around the world.

United Nations and UNESCO

In 2021, the 193 Member States at UNESCO’s General Conference adopted the Recommendation on the Ethics of Artificial Intelligence (REAI), the first global standard-setting instrument on the subject. Its aim is to encourage international and national policies and regulatory frameworks that focus on the respect, protection and promotion of human rights, and ensure that AI benefits humanity as a whole.

The REAI notes the positives of AI, such as its assistance in developing COVID-19 vaccines and treatments, while warning about its potential risks, such as the misuse of AI and the potential deepening of technological and economic divides globally, and calls for legislative gaps to be filled at the national level.

Other UN entities, such as the UNICRI Centre for AI and Robotics in conjunction with INTERPOL, have also begun to promote and discuss aspects of AI regulation and policy, in this case, in the criminal justice space. UNESCO has also developed a global online course on AI and the Rule of Law, with the aim to engage judicial bodies around AI’s application and impact on the rule of law.

The UN has also established AI For Good, a digital platform where ‘AI innovators and problem owners learn, build and connect to identify practical AI solutions to advance the UN Sustainable Development Goals.’ It recently released its 2022 report.

OECD

The OECD launched the Global Partnership on Artificial Intelligence (GPAI) in 2020, which called for AI to be developed in accordance with human rights and democratic values in order to ensure public confidence and trust in the technology, as outlined in the OECD Principles on Artificial Intelligence. The GPAI is currently focussed on four key areas: responsible AI, data governance, the future of work and innovation, and the commercialisation of AI.

World Economic Forum

The World Economic Forum has also weighed in on the AI debate, committing to ‘helping ensure that these systems emphasise privacy and accountability, and foster equality and inclusion’, calling for organisations to commit to the development of responsible AI, and releasing its recommended AI Government Procurement Guidelines.

The future of AI will depend on how it is regulated. Governments and organisations around the world are discussing what that regulation should involve, and some are moving to implement AI-specific regulation. Common themes are emerging, including requirements for fairness, accountability and transparency around the use of AI.

We will continue to monitor key developments in the regulation of AI, particularly as Australia develops its own approach.
