KWM Submission - Discussion Paper on Safe and Responsible AI in Australia

King & Wood Mallesons recently had the opportunity to provide feedback on the “Safe and responsible AI in Australia: Discussion Paper” (Discussion Paper). We have been working closely with Australian and international companies over the past few years to address the legal and ethical risks presented by the development and deployment of AI systems. Our work has included developing bespoke AI governance frameworks, designing and executing internal AI impact assessments, and designing and executing contractual arrangements to operationalise AI systems. Here is a snapshot of our submission.

Summary of KWM’s submission

We think it is crucial for the Commonwealth to exercise caution and refrain from hastily implementing regulation of AI in response to the hype and panic that we have seen during the last few months as companies (and the media) rush to capitalise on the uptake of generative AI (both in Australia and overseas).

While there may be some risks associated with AI, any AI-related regulation must be clearly targeted at, and proportionate to, identifiable and serious risks to individuals, society or the environment. This will involve taking into account the varying contexts in which AI systems can be deployed throughout the Australian economy and the need to support and promote innovation by Australian companies.

It will also be important for the Commonwealth to minimise regulation that would impose regulatory burdens unique to Australia. Minimising such regulation will allow Australian businesses to rapidly adopt AI technologies that are available overseas (making them more efficient domestically and more competitive globally) and will ensure that Australian organisations and consumers have access to global advances in AI systems that will be of benefit to them.

In summary, we consider that the Commonwealth should:

  1. Focus its initial energies on reviewing how existing Commonwealth and State legislation already addresses specific risks and harms presented by AI and, if there are gaps, consider how that legislation can be amended to address those risks and harms.

  2. Only consider introducing new horizontal (sector-wide) legislation if identified harms presented by AI cannot be adequately addressed by existing legislative or regulatory regimes (or amendments to them).

  3. If new horizontal AI legislation is to be introduced, ensure that it is principles-based legislation that implements a set of AI Governance Principles. We suggest an approach similar to the way in which the Privacy Act 1988 (Cth) (Privacy Act) implements the Australian Privacy Principles. A principles-based approach will be sector-neutral and will allow the legislation to remain flexible as the underlying technology changes. Such legislation should be combined with specific guidance from an appropriate regulator where particular use cases for AI would benefit from more detail on how the AI Governance Principles apply. The AI Governance Principles should:
    1. promote the establishment of internal mechanisms for organisations to identify whether they are deploying an AI system that will (or is likely to) involve a serious risk of harm to individuals, society or the environment;
    2. require that organisations undertake an AI Impact Assessment against the Australian AI Ethics Principles if they plan to deploy an AI system that will (or is likely to) present a serious risk of harm to individuals, society or the environment; and
    3. require that organisations take reasonable steps to mitigate the risks of serious harm to individuals, society or the environment arising from their deployment of any AI system.
  4. Coordinate approaches to the regulation of AI through the establishment of a new agency or regulator (or the designation of an existing agency or regulator) as a centralised point for regulating, coordinating and sharing expertise with other regulators responsible for sector-specific and domain-specific legislation relevant to AI technology.

  5. Not introduce regulation that could inhibit the development and adoption of low risk, open-source AI models in Australia.

We have explored this proposal in greater detail in our responses to the specific questions below. Ultimately, any regulation would need to promote responsible AI development and usage, and foster collaboration and innovation in the field while avoiding undue regulatory burdens.
