Australian Government Interim Response on the regulation of AI: inching towards Safe and Responsible AI

Tell me in one minute

On 17 January 2024, the Australian Government released its Interim Response to the 2023 “Safe and Responsible AI in Australia” consultation. It concludes that:

  • Australia’s existing laws do not adequately address the risks presented by AI
  • the Government will seek to regulate AI (rather than relying solely upon voluntary commitments). The Government has not yet decided whether it will regulate via amendments to existing laws or an alternative approach: this will be decided following further consultation
  • the Government will take a risk-based, technology-neutral approach to regulating AI

The interim response indicates that Australia is still taking a slow and steady approach to AI regulation. Australian companies developing or deploying AI systems, or considering doing so, should be mindful that regulation is coming, particularly in “high risk contexts”, and watch these developments closely.

Quick recap: the Safe and Responsible AI consultation

In June 2023, the Government held a public consultation on “Safe and Responsible AI in Australia”. The Government’s consultation paper explored what governance mechanisms Australia should introduce to ensure AI is developed and used safely and responsibly. The questions asked of respondents covered a range of topics, including:

  • whether Australia’s existing laws address the risks presented by AI
  • how Australia should regulate AI
  • whether Australia should adopt a risk-based approach to regulating AI (and, if so, what this should entail)

Over 500 responses were received from technology companies, businesses, academia and individuals, reflecting the high level of public interest in AI regulation. KWM’s response can be found here.

A key overarching theme in the responses was that the Government needs to do more to ensure that the development and deployment of AI is safe and responsible. There was, however, much divergence in how this should be achieved.

Key takeaways from the new Interim Response

The Government’s newly released interim response is light on detail but provides valuable insight into the future direction of AI regulation in Australia. It is clear that the Government considers that:

  • Australia’s existing laws do not adequately address the risks presented by AI, particularly the high-risk applications of AI and frontier models
  • the Government will seek to regulate AI in ‘high-risk’ settings (rather than solely relying upon voluntary commitments). Further consultations will be undertaken to determine whether mandatory regulation will be via amendments to existing laws or an alternative approach. Any regulatory response will be developed with global interoperability in mind
  • the Government will take a risk-based, technology-neutral approach to regulating AI

Unfortunately, we don’t have further detail at this stage as to how the Government will regulate AI (in substance or in form). Rather, the Government has indicated that it will consult further with industry on this question and that any response will be guided by a number of principles, which will see the Government:

  • adopting a risk-based approach
  • aiming for a balanced and proportionate position
  • acting in a collaborative and transparent manner in developing its response
  • acting as a trusted international partner by supporting global action to address AI risks, and
  • placing ‘people and communities at the centre when developing and implementing its regulatory approaches’

We have summarised the key takeaways from the Interim Response below, together with our commentary on each.

Australia’s current regulatory framework does not sufficiently address risks presented by AI (particularly the high-risk applications of AI in legitimate settings and frontier models)

It is well accepted that the development, deployment and use of AI give rise to a wide range of potential risks to individuals, businesses, society and the environment. The Government has categorised the risks of AI into technical risks, unpredictability and opacity, contextual risks, systemic risks and unforeseen risks.

Some of these risks will be addressed in upcoming legal reform, including the Privacy Act reforms, the eSafety Commissioner’s consultations on the Online Safety (Basic Online Safety Expectations) Determination 2022 and the draft Online Safety (Designated Internet Services – Class 1A and 1B Material) Industry Standard 2024, changes to competition and consumer laws to address digital platforms, and the Cyber Security Strategy.

However, the Government has indicated that more work will need to be done to adequately prevent AI-facilitated harms before they occur, and that it will consult on this. This work is (at least at first) likely to focus on the existing legislative frameworks identified in submissions as needing to be updated or clarified for AI, including:

  • competition and consumer law – to address whether individuals who generate deepfakes using AI can be liable for misleading and deceptive conduct
  • health and privacy laws – to address where health and care organisations and practitioners use AI in a way that causes clinical safety risks
  • copyright law – to address whether creative content can be used to train generative AI models, including remedies for infringement

Importantly, the Government has recognised that many applications of AI do not require a regulatory response (such as AI to help automate internal business processes). Accordingly, any future reform will be designed to ensure that AI in low-risk settings can flourish largely unimpeded.

The Government will consider developing mandatory safeguards/guardrails for the development and deployment of AI in ‘legitimate, high-risk settings’. The use of AI in low-risk settings will be able to flourish largely unimpeded

In light of the risks presented by AI, the submissions indicated broad agreement that the Australian Government cannot rely on voluntary guardrails alone and that legislative requirements will need to be mandated in some circumstances. This reflects a growing international trend away from voluntary AI commitments (with the exception of Singapore) and towards a tiered regulatory approach under which developers or deployers of certain AI systems must comply with mandated obligations.

Although we know the Government is considering introducing mandatory guardrails for the development and deployment of AI in legitimate, high-risk settings, what these will actually look like is not yet clear. At most, the Government has indicated its focus will be on ensuring there are testing, transparency and accountability obligations in high-risk settings.

To determine the impact of these mandatory guardrails, key questions need to be addressed by the Government including:

  • what AI systems will be captured by these guardrails
  • what is meant by high-risk settings (the Government has suggested it will, at least, capture AI applications to predict a person’s likelihood of recidivism, suitability for a job, or in enabling a self-driving vehicle)
  • what harms will be addressed
  • to whom the obligations will apply (in particular, how they will apply to developers and deployers of AI)
  • whether there will be exceptions
  • whether any AI systems will be banned in Australia (for example, Europe is moving towards banning AI systems presenting an unacceptable risk)
  • how the guardrails will be implemented (the Government has indicated the primary options are amendments to existing laws or an alternative approach, such as horizontal or vertical AI-specific regulation)

Moving forward, the Government will conduct further consultation on the guardrails (including how they will be implemented). Notably, the Government has suggested it will look to leverage existing requirements where mandatory guardrails for high-risk settings already exist.

Specific obligations will be considered for the development, deployment and use of frontier or general-purpose models

As part of developing mandatory guardrails, the Government is also considering whether to include specific obligations for:

  • “Frontier AI” (a focus of the recent UK AI Safety Summit; the term generally refers to the most capable AI models)
  • General-purpose models (these are AI models, like large language models, that have a wide range of potential downstream uses)

What these obligations could be remains to be seen, although international examples can be found in the reporting obligations for highly capable foundation models in the US Executive Order on AI and in the regulation of general-purpose models in Europe’s proposed AI Act.

A voluntary AI Safety Standard is being developed, implementing risk-based guardrails for industry

Since the introduction of the Australian AI Ethics Principles in 2019, there has been an increasing push for Australian businesses to introduce governance measures to responsibly develop and deploy AI.

The Government has indicated that the National AI Centre will work with industry to draw together the multitude of Australian and international responsible AI principles, guidelines and frameworks to produce an up-to-date, best-practice voluntary risk-based AI safety framework for the responsible adoption of AI by Australian businesses.

Unfortunately, the Government has not indicated which principles, guidelines or frameworks will be in scope. We suspect these will include (at a minimum) the Australian AI Ethics Principles, the OECD Principles on Artificial Intelligence, the recent Hiroshima Process International Guiding Principles for Advanced AI Systems and the NIST AI Risk Management Framework.

Options for voluntary labelling and watermarking of AI-generated materials are under consideration

It is increasingly difficult to differentiate human-generated content from AI-generated synthetic content (such as deepfakes). This fuels the spread of misinformation and misleading content. Introducing traceability and mandating transparency can help mitigate these issues.

The Government has not provided any details of how it will approach watermarking, other than to say that it will work with industry to explore the merits of voluntary labelling and watermarking in high-risk settings. This is not unexpected, with companies such as Google, Microsoft and Meta already exploring options for watermarking.

It is important, however, to note that the Government’s proposal is only for voluntary measures (and so seems unlikely to apply to all AI products) and that current watermarking and AI detection techniques are imperfect (especially in the context of AI-generated text).
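
As a simple illustration of the “labelling” end of the spectrum (as distinct from statistical watermarking of model outputs), the Python sketch below attaches a provenance record to a piece of generated content. It is entirely hypothetical: the Interim Response does not specify any technical scheme, the record format here is invented for illustration, and real-world provenance approaches (such as C2PA content credentials) are considerably more involved.

# Hypothetical sketch of voluntary labelling of AI-generated content.
# The record format is invented for illustration only; it is not a
# standard proposed in the Interim Response.
import hashlib
import json
from datetime import datetime, timezone

def label_ai_content(content: str, model_name: str) -> dict:
    """Wrap generated content with a simple provenance label."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            # The hash lets a recipient check the label still matches the content.
            "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        },
    }

record = label_ai_content("Draft marketing copy...", model_name="example-llm-v1")
print(json.dumps(record, indent=2))

A label of this kind is also trivially removable, which is one reason voluntary labelling is generally discussed alongside watermarking and why current detection techniques are described as imperfect.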

A temporary expert advisory body will be established to support the development of options for further AI guardrails

This body will be an interim expert advisory group convened by the Department of Industry, Science and Resources. The Government has indicated that a more permanent body may be established.

What should you do now?

Australia is still only at the beginning of its journey to regulate AI. While the Government is developing the mandatory guardrails, and noting the expected passage of Europe’s proposed AI Act in the coming months, it is important that all Australian companies actively consider the legal and governance implications of AI systems they are developing and deploying.

This means not only ensuring your voice is heard in upcoming consultations but also, given the time needed to implement regulatory changes and the business drivers around harnessing the benefits of AI now, considering how you can develop and deploy AI systems safely and responsibly before any changes come into force.

In particular, we recommend that companies:

  • Establish a risk-based AI governance framework that defines acceptable risk levels for your organisation. It should also include processes (e.g. AI Impact Assessments) to assess the risks posed by higher-risk AI systems throughout their lifecycle. Review it for consistency against the voluntary AI Safety Standard when that is eventually released. A simple illustrative sketch of this kind of risk tiering follows this list.
  • Establish guardrails to ensure that AI is not used for higher-risk activities without appropriate consideration of the risks and appropriate internal authorisation.
  • Implement processes to test AI systems before and after release.
  • Implement processes to ensure that AI systems are audited and their performance monitored. Do you have documentation about how the AI system works (including what data it is trained on, how it was developed and its outputs)? If not, how will you manage risks such as inaccuracy or bias arising from the AI system?
  • Consider the circumstances in which you will inform users (both internal and external) when an AI system is being used or when content presented to them is generated by AI. Do you need to produce instructions to enable users to interpret the output and use the AI system appropriately?
  • Ensure that there is appropriate human oversight of the AI system while it is in use, particularly if there is a risk that decisions or recommendations made by the AI system could materially impact individuals or organisations.
  • Establish training to ensure your employees understand the risks of AI.
  • Ensure that there are appropriate cybersecurity protections around the AI system.
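
To make the first recommendation more concrete, the Python sketch below shows one way an organisation might encode internal risk tiers and a lightweight AI Impact Assessment. It is purely illustrative: the tiers, trigger questions and classification rules are our hypothetical examples, not criteria prescribed by the Interim Response or the forthcoming voluntary AI Safety Standard.

# Purely illustrative sketch of a risk-based AI governance check.
# The tiers, questions and rules are hypothetical examples only; they
# are not drawn from the Interim Response or any published standard.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"                # e.g. internal process automation
    HIGH = "high"              # e.g. recruitment, recidivism, self-driving
    PROHIBITED = "prohibited"  # uses the organisation will not pursue

@dataclass
class AIImpactAssessment:
    system_name: str
    affects_individuals: bool  # could outputs materially affect people?
    automated_decision: bool   # does it decide without human review?
    sensitive_domain: bool     # health, employment, justice, safety?
    findings: list[str] = field(default_factory=list)

    def classify(self) -> RiskTier:
        """Map assessment answers to an internal risk tier."""
        if self.sensitive_domain and self.automated_decision:
            self.findings.append("Escalate: fully automated decision in a sensitive domain.")
            return RiskTier.PROHIBITED
        if self.affects_individuals or self.sensitive_domain:
            self.findings.append("High risk: require testing, human oversight and sign-off.")
            return RiskTier.HIGH
        self.findings.append("Low risk: standard controls apply.")
        return RiskTier.LOW

# Example: a hypothetical CV-screening tool would land in the HIGH tier.
assessment = AIImpactAssessment(
    system_name="cv-screening-assistant",
    affects_individuals=True,
    automated_decision=False,
    sensitive_domain=True,
)
print(assessment.classify().value, assessment.findings)

The specific rules matter less than the discipline they impose: risk classification becomes explicit, recorded and reviewable, which should make it easier to map an internal framework onto the voluntary AI Safety Standard once it is released.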