
AI regulation is coming to Australia: what you need to know


TLDR

  • The Australian Government has released a proposal for the introduction of ten mandatory guardrails (Mandatory Guardrails) that AI developers and deployers must comply with for AI used in “high-risk” settings and for general purpose AI models.
  • The Mandatory Guardrails are complemented by ten voluntary guardrails in the Voluntary AI Safety Standards (Safety Standards), nine of which are the same as the Mandatory Guardrails. As the Safety Standards have immediate effect, organisations that want to follow best practice in AI governance and stay ahead of the regulatory curve should take steps to comply with the Safety Standards now.

Introduction

On 5 September 2024, the Department of Industry, Science and Resources (Department) released:

  • the proposals paper “Introducing mandatory guardrails for AI in high-risk settings” (Mandatory Guardrails Paper). This paper proposes that Australia implement 10 mandatory guardrails (Mandatory Guardrails), which AI developers and deployers must comply with for AI in “high-risk settings” and which reflect a risk-based ex ante approach to regulating AI. The Mandatory Guardrails Paper is open for public consultation until 4 October 2024; and
  • the “Voluntary AI Safety Standards” (Safety Standards), which set out 10 guardrails designed to provide practical guidance to AI developers and AI deployers on the safe and responsible development and deployment of AI systems in Australia. These Safety Standards are voluntary but, with 9 of the 10 standards overlapping with the Mandatory Guardrails, they are a clear indication that the Australian Government expects AI developers and AI deployers to implement AI governance processes today rather than wait for the Mandatory Guardrails to become law. We consider that the Safety Standards, when combined with the increasing focus on AI risk management in the context of directors’ duties (including the AICD’s recent Director’s Guide to AI Governance), represent the new benchmark for what Australian organisations should be doing.

With the Australian Government giving organisations less than a month to comment on the Mandatory Guardrails Paper, the race is on for organisations not only to review their existing AI governance structures against the Mandatory Guardrails and the Safety Standards, but also to make sure their voice is heard on how AI should be regulated in Australia.

This article summarises the key aspects of the Mandatory Guardrails Paper and what you should be doing now.

Will the Mandatory Guardrails apply to me?

The Department has proposed that the Mandatory Guardrails will apply throughout the AI lifecycle to both:

  • developers of AI (defined as organisations or individuals who design, build, train, adapt, or combine AI models and applications); and
  • deployers of AI (defined as any individual or organisation that supplies or uses an AI system to provide a product or service).

This distinction is important as different entities play different roles, and have different levels of control, throughout the AI lifecycle. However, it is important to note that the Mandatory Guardrails:

  • contain no territorial nexus. That is – it is unclear whether they will only apply to: developers or deployers based in Australia; AI models and AI systems that are developed or deployed in Australia; or any AI models and AI systems that impact end users in Australia.

As many AI developers are based overseas, and as the Mandatory Guardrails are designed to be met by both developers and deployers, this will have a significant impact on the efficacy of the Mandatory Guardrails. Practically, even if the guardrails do have extraterritorial effect, enforcing them against overseas developers or deployers could prove difficult. It would also be difficult for Australian deployers to enforce them against overseas developers where there is likely to be an imbalance in bargaining power; and

  • do not draw a distinction between internal and external deployment of AI systems. That is – the deployment of an AI system as part of an internal process may be subject to the same obligations as a deployment for external purposes if that AI system is used in order to provide a product or service to end users. Practically, this means that deployers must assess every AI system that they use to determine which ones fall within the scope of the Mandatory Guardrails.

The Department is also seeking feedback on how the Mandatory Guardrails should apply to ‘open source’ models (which could include situations where the developer releases the weights of the model and has limited control over how they are subsequently deployed).  

What AI models/AI systems will the Mandatory Guardrails apply to?

The Department has proposed that the Mandatory Guardrails will apply in two core situations.

  • AI systems in ‘high-risk’ settings. In applying these Mandatory Guardrails to “AI systems”, the Department is making a distinction between an AI model[1] and an AI system[2]. We explore how high-risk settings may be determined in our table below.
  • All general purpose AI models (GPAI), currently defined as “an AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems”. On the Department’s understanding, these would include OpenAI’s GPT, DALL-E and Sora.


Is my AI covered by the proposal?

High-risk AI systems (other than GPAI)

High-risk AI systems are determined based on the following principles:

  1. The risk of adverse impacts to an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights law obligations;
  2. The risk of adverse impacts to an individual’s physical or mental health or safety;
  3. The risk of adverse legal effects, defamation or similarly significant effects on an individual;
  4. The risk of adverse impacts to groups of individuals or collective rights of cultural groups;
  5. The risk of adverse impacts to the broader Australian economy, society, environment and rule of law; and
  6. The severity and extent of those adverse impacts outlined in the principles above.

Practically, this approach means that:

  • organisations will need to assess all AI systems to determine if they would be considered ‘high-risk’ in light of the principles;  
  • a broad range of AI systems is likely to be captured depending on, for example, what data the system is trained on, how the output is used and the impact of that output. This means the same AI system may be high-risk in some contexts and not in others (eg the Department states that facial recognition when used to unlock a personal phone may not be a high-risk use of AI). This is likely to result in a different range of AI being captured than under the European AI Act’s more “list-based” approach;
  • there is currently no mechanism to ensure that low-risk AI is not caught (eg AI that is only used for procedural tasks – although the Department has invited feedback on this issue);
  • organisations will need to consider the impact of the Mandatory Guardrails in relation to the potential harms AI presents not only to individuals (eg infringement of civil liberties) but also to groups (eg exacerbation of bias) and societal structures (eg undermining electoral processes);
  • it is not clear to what extent AI systems with severe but low-probability impacts, or low but high-probability impacts, would be captured; and
  • no AI will be banned in Australia under these Mandatory Guardrails (as opposed to the approach taken in Europe).

Key issues for organisations to consider are the scope of the principles-based approach and how the principles will be interpreted by both developers and deployers.
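To make the assessment step concrete, below is a minimal sketch of how an organisation might record its AI systems in a register and screen them against the proposed principles. It is purely illustrative: the register fields, the Principle enumeration, the severity scale and the screening logic are our own assumptions rather than anything prescribed by the Mandatory Guardrails Paper.

```python
# Illustrative only: a hypothetical AI-system register screened against the
# proposed high-risk principles. Field names, severity scale and logic are
# assumptions for demonstration; the Mandatory Guardrails Paper prescribes no format.
from dataclasses import dataclass, field
from enum import Enum, auto


class Principle(Enum):
    HUMAN_RIGHTS = auto()       # adverse impacts on an individual's rights
    HEALTH_SAFETY = auto()      # physical or mental health or safety
    LEGAL_EFFECTS = auto()      # adverse legal, defamatory or similarly significant effects
    GROUP_IMPACTS = auto()      # impacts on groups or collective rights of cultural groups
    SYSTEMIC_IMPACTS = auto()   # broader economy, society, environment and rule of law
    # The sixth principle (severity and extent of impacts) is modelled via `severity` below.


@dataclass
class AISystemRecord:
    name: str
    developer: str
    use_case: str
    principles_engaged: set[Principle] = field(default_factory=set)
    severity: str = "low"       # assumed scale: low / medium / high
    is_gpai: bool = False


def flag_for_detailed_assessment(record: AISystemRecord) -> bool:
    """Flag a system if it is GPAI or engages any principle with non-trivial severity."""
    if record.is_gpai:
        return True
    return bool(record.principles_engaged) and record.severity in {"medium", "high"}


# Example: facial recognition used only to unlock a personal phone may not be
# high-risk (the Department's own example), whereas an AI system used to
# shortlist job applicants is far more likely to engage the principles.
register = [
    AISystemRecord("Phone unlock (face)", "DeviceVendor", "Personal device authentication"),
    AISystemRecord("Candidate screening model", "ThirdPartyVendor", "Shortlisting job applicants",
                   principles_engaged={Principle.HUMAN_RIGHTS, Principle.GROUP_IMPACTS},
                   severity="high"),
]

for rec in register:
    status = "assess as potentially high-risk" if flag_for_detailed_assessment(rec) else "monitor"
    print(f"{rec.name}: {status}")
```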

General Purpose AI (GPAI)

GPAI is defined as “an AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems”.

The Department is seeking feedback on whether the Mandatory Guardrails should apply to all GPAI models, although at one point the Mandatory Guardrails Paper says the Mandatory Guardrails will only apply to “advanced, highly-capable” GPAI models.

This proposal is unlikely to be effective on its current formulation as:

  1. the breadth of the definition (and the examples given by the Department) will capture a large range of AI models, including all general purpose language models and image and video generators, rather than taking into account the size of the model or how it will be used (eg internal use only for low-risk use cases); and
  2. the practical reality is that currently most large GPAI models are trained outside of Australia. This will cause practical issues where the Australian deployer has obligations under the Mandatory Guardrails but the international developer does not.

What are the Mandatory Guardrails?

The below table outlines the proposed 10 Mandatory Guardrails. At a high level, they are designed to:

  • focus on testing (to ensure AI performs as intended during its lifecycle), transparency along the supply chain and accountability for governing and managing the risks of AI;
  • be flexible and adaptable so that they can be tailored to different risk profiles (eg narrow AI models as compared to a GPAI model), different roles in the AI supply chain (ie they should be distributed according to who is best equipped to address risks) and evolving best practice; and
  • operate alongside other laws and regulatory obligations such as the Privacy Act 1988, Copyright Act 1968, Criminal Code Act 1995, Corporations Act 2001, Fair Work Act 2009, the Competition and Consumer Act 2010 and administrative law.

Although the Mandatory Guardrails are aimed at both developers and deployers, many of the guardrails are more within the control of the developers. This may cause practical problems where international developers are not bound by the Mandatory Guardrails. Accordingly – deployers should be paying particular attention to their contracts with developers (something that can’t wait for the laws to be implemented).


1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance

To the extent that organisations do not already have AI governance frameworks in place, this guardrail requires the implementation of governance policies that cover regulatory compliance, data and risk management, clear lines of accountability and training. Notably, the Mandatory Guardrails Paper proposes that organisations make these policies publicly available and accessible.

For the Mandatory Guardrails to be effective, this guardrail must practically be implemented by all developers and deployers in order to enable the identification of which AI models/AI systems will be subject to the Mandatory Guardrails.

Note: The Department has suggested this guardrail is equally applicable to both developers and deployers. 

2. Establish and implement a risk management process to identify and mitigate risks 

As with Guardrail 1, this should already be a core part of organisations’ AI governance processes. Practically, it involves implementing processes to identify and mitigate known or foreseeable risks throughout the AI lifecycle. This involves both AI Impact Assessments of individual AI models/systems and broader AI risk and impact management processes (such as an organisational tolerance for AI use).

Note: The Department has suggested this guardrail is likely to fall more heavily on developers, who are better placed to assess the risks and establish mitigation strategies. Deployers will need to be focused on understanding suppliers’ risk management processes and on managing specific use case risks and unforeseen risks.

3. Protect AI systems, and implement data governance measures to manage data quality and provenance

This guardrail seeks to ensure that the data AI systems and AI models are trained on is legally obtained, high quality, reliable, fit-for-purpose, representative and protected.

Practically, this guardrail builds upon existing data handling and security requirements (including under the Privacy Act 1988, the Copyright Act 1968 and the Security of Critical Infrastructure Act 2018). However, it will likely require an uplift of existing policies relating to data usage, IP, Indigenous Data Sovereignty, privacy, confidentiality and cybersecurity.

Note: The Department has suggested this guardrail is likely to fall more heavily on developers although deployers will need to focus on any additional data they input into the AI model.

4. Test AI models and systems to evaluate model performance and monitor the system once deployed 

This guardrail requires organisations to test AI models and AI systems on an ongoing basis to evaluate their performance before and after deployment.

Practically, the Government is looking to align the implementation of this guardrail with measurement methodologies already developed (eg ISO/IEC TR 29119-11:2020 and SA TR ISO/IEC 24027:2022) or in development (eg by NIST).

Note: The Department has suggested this guardrail is likely to fall more heavily on developers. 
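By way of illustration only, the sketch below shows one way an organisation could run the same labelled test cases against a model before deployment and again in production, comparing accuracy to a threshold. The metric, the threshold and the toy model are our own assumptions and are not drawn from the ISO/IEC or NIST material referenced above.

```python
# Illustrative only: a minimal pre-/post-deployment evaluation loop.
# The accuracy metric, threshold and alerting are assumptions, not requirements
# taken from the ISO/IEC or NIST methodologies mentioned in the paper.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class EvalResult:
    stage: str          # "pre-deployment" or "in-production"
    accuracy: float
    passed: bool


def evaluate(predict: Callable[[str], str],
             cases: Sequence[tuple[str, str]],
             stage: str,
             threshold: float = 0.9) -> EvalResult:
    """Run the model over labelled cases and compare accuracy against a threshold."""
    correct = sum(1 for text, expected in cases if predict(text) == expected)
    accuracy = correct / len(cases)
    return EvalResult(stage=stage, accuracy=accuracy, passed=accuracy >= threshold)


# A trivial stand-in "model" so the example runs end to end.
def toy_model(text: str) -> str:
    return "positive" if "good" in text else "negative"


golden_set = [
    ("good service", "positive"),
    ("bad outcome", "negative"),
    ("good result", "positive"),
]

# In practice the in-production run would use fresh, periodically refreshed cases.
for result in (evaluate(toy_model, golden_set, "pre-deployment"),
               evaluate(toy_model, golden_set, "in-production")):
    flag = "OK" if result.passed else "ALERT: below threshold"
    print(f"{result.stage}: accuracy {result.accuracy:.0%} ({flag})")
```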

5. Enable human control or intervention in an AI system to achieve meaningful human oversight

This requires organisations to ensure that humans can understand high-risk AI, oversee its operation and intervene where necessary across the AI supply chain and throughout the AI lifecycle.

However, some of the requirements of this Mandatory Guardrail (reviewing outputs and reversing decisions) appear to specifically contemplate automated decision-making, so it is unclear whether these requirements are intended to apply to other AI systems that might fall within the scope of high-risk AI (which currently seems to include image generators such as DALL-E).

It is also unclear how these requirements will operate in practice in light of the “black box” problem, which we have written about here.

Note: The Department has suggested this guardrail will fall on developers in relation to ensuring that oversight/intervention can be exercised during deployment. Deployers will need to be focused on equipping people with the knowledge and skills to understand and operate the AI model/AI System.

6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content 

Where an end-user will be impacted by high-risk AI, deployers will be required to implement steps to inform end-users when AI is used to make or inform decisions relevant to them and when they are directly interacting with an AI system. Both developers and deployers must also apply best efforts to ensure AI-generated content can be detected as AI-generated or manipulated.

Practically, the Government has suggested that regard could be had to the Coalition for Content Provenance and Authenticity (C2PA) standards (see https://c2pa.org/).

Note: Other than to the extent that developers can technically enable AI-generated content to be labelled, the Department has suggested this guardrail will primarily fall on deployers who are interacting with the end-user.
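As a simplified illustration of the kind of content-provenance labelling this guardrail contemplates, the sketch below attaches a small sidecar JSON manifest (generator, timestamp and a content hash) to an AI-generated file and verifies it later. It is not an implementation of the C2PA specification (real C2PA manifests are cryptographically signed and embedded in the asset); the manifest fields and the sidecar-file approach are assumptions chosen to keep the example self-contained.

```python
# Illustrative only: a simplified provenance "manifest" for AI-generated content.
# This is NOT the C2PA specification; see https://c2pa.org/ for the real standard.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def write_provenance_manifest(content_path: Path, generator: str) -> Path:
    """Write a sidecar JSON manifest recording that the file is AI-generated."""
    digest = hashlib.sha256(content_path.read_bytes()).hexdigest()
    manifest = {
        "asset": content_path.name,
        "sha256": digest,
        "generator": generator,          # eg the model or system that produced the asset
        "ai_generated": True,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    manifest_path = content_path.with_name(content_path.name + ".provenance.json")
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return manifest_path


def verify_provenance(content_path: Path, manifest_path: Path) -> bool:
    """Check the content still matches the hash recorded in the manifest."""
    manifest = json.loads(manifest_path.read_text())
    digest = hashlib.sha256(content_path.read_bytes()).hexdigest()
    return manifest.get("sha256") == digest and manifest.get("ai_generated") is True


if __name__ == "__main__":
    image = Path("generated_image.png")
    image.write_bytes(b"\x89PNG...")     # stand-in bytes for a real AI-generated image
    manifest = write_provenance_manifest(image, generator="hypothetical-image-model")
    print("verified:", verify_provenance(image, manifest))
```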

7. Establish processes for people impacted by AI systems to challenge use or outcomes 

Where individuals are negatively impacted by high-risk AI systems, this guardrail will require organisations to ensure that they can contest AI-enabled decisions or make complaints about their experience or treatment.

Practically – this guardrail is closely linked with guardrails 5 and 6. Notably, it does not go as far as requiring companies to offer end-users the ability to opt out of automated processing (or to provide consent for automated processing). 

Note: This guardrail is only applicable to deployers.

8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks 

This guardrail requires organisations to share information about high-risk AI with other participants in the AI lifecycle.

Note: The Department has suggested this guardrail will primarily be focused on developers. Deployers are likely to need to notify developers of adverse incidents.

9. Keep and maintain records to allow third parties to assess compliance with guardrails 

This guardrail requires organisations to maintain records (including technical documentation) about high-risk AI systems throughout their lifecycle. These records must be provided to relevant authorities on request.

Practically, the Government has proposed that “this could include the power for regulators to access documents on enforcement and compliance in their regulatory remit”. This has the potential to result in a considerable expansion of existing record-keeping and auditing requirements.

Note: The Department has suggested this guardrail is applicable to both developers and deployers (although the burden will likely be heavier on developers).

10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails

This guardrail requires organisations to be able to demonstrate that they have adhered to the Mandatory Guardrails for “high-risk AI systems”. This can be undertaken by the developer, by a third party or by government entities/regulators.

A conformity assessment will be required before the high-risk AI system is placed on the market and it will need to be periodically repeated.

Note: The Department has suggested this guardrail falls on developers before an “AI system is deployed” and on deployers once the “AI system is retrained or undergoes changes”. This raises practical questions about the deployer/developer split.

Practical Note: This is the only guardrail that differs between the Mandatory Guardrails and the Safety Standards. In its place, the Safety Standards require organisations to engage with stakeholders and evaluate their needs and circumstances (with a focus on safety, diversity, inclusion and fairness).

How will the Mandatory Guardrails be implemented?

The Mandatory Guardrails Paper represents a risk-based ex ante approach to regulating AI with a focus on preventative measures aimed at avoiding significant harm before it occurs. Such a precautionary approach is designed to shift industry practices rather than to focus on post-market liability measures. However, the Department has not yet determined how the Mandatory Guardrails should be implemented. It has tabled three options:

  • A domain-specific approach – this involves adopting the guardrails within existing regulatory frameworks as needed. Practically, this requires the review and amendment of existing laws to embed the guardrails where they currently do not exist. It will likely be approached on a domain basis and will take the longest to implement. This approach may cause issues for organisations that work across various sectors/regulated activities, as it has the potential to result in staggered (or even differing) obligations and is unlikely to apply to developers (as many existing regulatory frameworks are focused on the deployer).
  • A framework approach – this involves introducing new framework legislation to adapt existing regulatory frameworks across the economy. Practically, this is likely to involve one ‘over-arching’ legislative instrument that will rely on amendments to existing laws to enable existing regulators to enforce it. Similar to the domain specific approach – the reliance on existing regulatory frameworks means it is unlikely to apply to developers.
  • A whole of economy approach – this involves introducing a new cross-economy AI-specific Act (for example, an Australian AI Act). Practically, this will involve one ‘over-arching’ legislative instrument that will apply across existing regulatory frameworks. It may also require the establishment of an independent AI regulator (or expanded powers for an existing regulator). Unlike the other approaches, this is the most likely to result in overlap with existing laws.

What should I do now?

The Mandatory Guardrails are currently under consultation, so we don’t expect to see revised Mandatory Guardrails (and the associated legal obligation for developers and deployers to comply with them) until 2025 at the earliest.

However, the Safety Standards duplicate guardrails 1 to 9 of the Mandatory Guardrails and are a clear indication from the Australian Government that deployers and developers should be focusing on AI governance sooner rather than later. The Safety Standards have been developed as an iterative, practical guide for how organisations (especially deployers, in this first version) can safely and responsibly use and innovate with AI in a manner that is consistent with the proposed Mandatory Guardrails.

We recommend you:

  • assess whether you want to respond to the Mandatory Guardrails Paper. A key question that is not addressed in the proposal is whether there is an economic benefit in regulating AI today (especially GPAI); and
  • assess your existing AI governance frameworks against the Safety Standards. These have already been mapped against Australia’s AI Ethics Principles and broadly align with existing frameworks (such as ISO/IEC 42001:2023 and the NIST AI RMF). Accordingly, if you have already aligned with these principles and frameworks, you are in a good position to evolve your AI governance framework. If you do not have an AI governance framework in place and you are deploying or developing AI, you should put one in place now!

Please do not hesitate to contact the KWM team if you have any questions, or if we can assist you to implement or uplift your AI governance framework or to draft submissions on the consultation.



References

  • [1]

    an AI model refers to the raw, mathematical ‘engine’ of AI applications. In the context of large language models, an AI model will be the combination of the language model’s architecture and the parameters learned through training. We consider that examples of AI models would include OpenAI’s GPT-4o and Claude 3.5 Sonnet (or, more specifically, models known by such names as gpt-4o-2024-08-06 and claude-3-5-sonnet-20240620).

  • [2]

    An AI system is defined as “A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.” This definition refers to the overall software system into which an AI model is incorporated. We consider examples would include products such as ChatGPT and Meta AI.
