Tell me in two minutes
- US President Joe Biden issued an Executive Order encompassing a wide array of AI-related topics, including future reporting obligations for private companies developing certain AI models.
- Two letters from members of the AI community raised concerns that the Executive Order and other potential regulations could damage open research and the open source community, favouring incumbents instead.
- The UK AI Safety Summit produced a largely symbolic Bletchley Declaration, but also some more concrete statements indicating international support for researching the risks and collaborating in the testing of frontier AI systems.
- The G7 released an 11-point voluntary code of conduct for organizations developing “the most advanced AI systems”.
- There has been a recent international push to address the risk of AI in the military context, including a UN General Assembly resolution raising concerns about the use of lethal autonomous weapons, a US-led declaration on responsible use of AI in military applications, and a number of recent statements by the US on the use of AI in the context of nuclear weapons.
Artificial intelligence is generating a lot of text
Recent developments in the field of artificial intelligence have been staggering. Perhaps the only thing in human history that has experienced greater growth is the number of AI-related summits, statements, letters, declarations and regulations - resulting in a tech lawyer’s version of the “productivity paradox”.
In this Insight, we set out some of the more notable developments that have occurred over the past two weeks.
The US Executive Order
On 30 October 2023, United States President Joe Biden issued a landmark Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence.
The 36-page Executive Order deals with a wide range of matters relating to AI and, despite being primarily a directive to the executive arm of the US government, also imposes a number of obligations on private companies (through the use of existing powers under the US Defense Production Act).
Key elements of the Executive Order include:
- policy guidance on responsible AI: a requirement that executive departments and agencies adhere to a series of underlying policy principles when complying with the EO, including that AI should be safe and secure, and that risks from AI systems should be mitigated before those systems are put to use
- reporting on large AI models: reporting obligations on companies developing large foundation models that meet certain technical thresholds (which are expected to apply only to future AI models trained using more computing power than OpenAI’s GPT-4, or to other large models trained primarily on biological sequence data; an illustrative sketch of the scale involved follows this list)
- reporting on large computing clusters: reporting obligations on companies, individuals, or other organizations or entities acquiring, developing or possessing a “large-scale computing cluster” that meets certain technical thresholds
- reporting by US IaaS providers: reporting obligations on US infrastructure as a service (IaaS) providers in relation to certain transactions that allow foreign actors to train large AI models
- widely available model weights: a requirement for the Secretary of Commerce to submit a report that outlines the potential risks, benefits, other implications, and appropriate policy and regulatory approaches related to dual-use foundation models for which the model weights are widely available
- standards: directives to the National Institute of Standards and Technology (NIST), which is required to develop several standards, reporting requirements and best practices for companies developing and deploying AI systems
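To give a sense of the scale of the compute thresholds mentioned above, the sketch below estimates the total training compute of a hypothetical model using the common “6 × parameters × training tokens” rule of thumb and compares it against the interim thresholds set out in the Executive Order (10^26 operations for models generally, and 10^23 operations for models trained primarily on biological sequence data). The estimation rule and the example model are illustrative assumptions only; the Executive Order itself frames the thresholds in terms of total operations used in training, not model size.

```python
# Illustrative sketch only: a back-of-the-envelope check of whether a hypothetical
# training run would exceed the Executive Order's interim reporting thresholds.
# The "6 * parameters * training tokens" rule of thumb and the example model below
# are assumptions for illustration, not definitions used in the Executive Order.

GENERAL_THRESHOLD_OPS = 1e26       # interim threshold for models generally
BIO_SEQUENCE_THRESHOLD_OPS = 1e23  # interim threshold for models trained primarily on biological sequence data


def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of total training operations using the common 6 * N * D heuristic."""
    return 6 * parameters * training_tokens


def reporting_threshold_met(parameters: float, training_tokens: float,
                            primarily_biological: bool = False) -> bool:
    """Compare the estimated training compute against the relevant interim threshold."""
    threshold = BIO_SEQUENCE_THRESHOLD_OPS if primarily_biological else GENERAL_THRESHOLD_OPS
    return estimated_training_ops(parameters, training_tokens) >= threshold


if __name__ == "__main__":
    # Hypothetical model: 1 trillion parameters trained on 20 trillion tokens
    # comes out at roughly 1.2e26 operations, just over the general threshold.
    ops = estimated_training_ops(1e12, 20e12)
    print(f"Estimated training compute: {ops:.2e} operations")
    print("General reporting threshold met:", reporting_threshold_met(1e12, 20e12))
```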
The Executive Order also includes numerous directions to various US government departments and agencies on matters such as:
- government use of AI: the procurement and use of AI and personal information by government (including in policing, criminal justice and welfare contexts)
- risks of AI: the potential risks of AI (eg, national security, misinformation, intellectual property, competition, labour)
- specific industry concerns: the use of AI in specific industries (eg, healthcare, education, transport)
- US leadership: the desire for the US to take international leadership in AI matters, including in light of the potential impact AI could have on US critical infrastructure.
While the Executive Order primarily affects US businesses, there are a few potential implications for Australia:
- An alternative to the EU AI Act: The Executive Order presents an alternative approach to the EU AI Act. Instead of proposing comprehensive new legislation, the US is largely using existing laws, investing in understanding and monitoring the risks of AI and only imposing reporting obligations. This is partly by necessity, given the limitations of executive power and the difficulty in passing legislation through US Congress. However, it is also consistent with the US’s generally more hands-off approach to tech regulation. The concepts in the Executive Order may also influence the regulatory direction of the Australian government’s review of Safe and Responsible AI in Australia given the Industry and Science Minister Ed Husic’s stated desire to “harmonise where we can, localise where we have to”.
- US standards: The Executive Order also indicates the growing importance of NIST, its AI Risk Management Framework and related standards and guidance. Although these are non-binding on private companies, they may increasingly influence the procurement decisions of the US government, which could result in their broad adoption across the US tech industry. As a result, companies should consider NIST’s standards and guidance, as they seem likely to become more embedded in both US and Australian law in the future.[1]
- Regulation of computing power: The Executive Order demonstrates a willingness by the US to regulate computing power (much as the US has increasingly been willing to influence semiconductor supply chains). For Australian companies, the immediate impact will be minimal (other than some reporting required of US IaaS providers in certain very limited circumstances). However, it is possible that the Executive Order is a first step towards increased regulation of computing power and of how AI models are made available, which could have much broader impacts on the global AI industry.
Letters in support of open source and open research
While the US Executive Order has been greeted with relief in some quarters, two recent letters indicate growing concerns that future regulation may inhibit open research and the development of the open source ecosystem.
Joint Statement on AI Safety and Openness
An open letter titled “Joint Statement on AI Safety and Openness”, released by Mozilla, has been signed by an unlikely coalition of individuals from academia, cybersecurity, big tech (eg, Meta, Google, Amazon) and emerging AI companies (eg, OpenAI, HuggingFace, Mistral). The statement:
- warns that “quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation”; and
- argues “open, responsible and transparent approaches will be critical to keeping us safe and secure in the AI era”.
a16z letter
At about the same time, a separate letter raising concerns about the impact of the Executive Order on the open source ecosystem was sent directly to President Joe Biden, signed by a number of high-profile individuals in the tech, AI and venture capital community (including individuals from a16z, Meta, Mistral, Shopify, HuggingFace and Perplexity). This letter:
- pointed to the benefits of open model weights (and openness generally) in terms of promoting transparency, competition, security and research; and
- raised concerns that the Executive Order will increase barriers to entry in favour of incumbents and allow “several large tech corporations to capture the market”.
The issues of open research, open source and innovation will be relevant to the regulation of AI in Australia and, given the Australian government’s desire to promote Australian innovation, we consider it important that the government does not rush to impose burdensome regulation without properly considering the concerns raised in these letters.
The UK AI Safety Summit
From 1 to 2 November 2023, the UK hosted an international AI Safety Summit at Bletchley Park in Buckinghamshire, which was attended by a number of world leaders, AI experts and executives from leading AI companies. The primary focus of the Summit was to consider the risks associated with AI and discuss how they can be mitigated through coordinated international action.
While there were many criticisms of the Summit (including in relation to the focus of the Summit, the attendees and some of the political theatre), it nonetheless resulted in three key substantive (though largely symbolic) outcomes:
Bletchley Declaration
The Summit concluded with a declaration in which 28 governments (including Australia, the United States, the European Union and China) made a number of high-level statements about developing and deploying AI in a human-centric and responsible way. The declaration included statements:
- recognising both the “transformative opportunities of AI” and the “necessity and urgency” of considering the broad range of risks involved, including the potential for “serious, even catastrophic, harm”
- affirming the need to deepen understanding of these issues and engage with a broad range of partners (including companies, civil society and academia)
- affirming actors developing frontier AI capabilities “have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures”.
Statement on Safety Testing
A statement issued by the UK Government announced that a number of attendees, including Australia, the United States and the European Union agreed “to collaborate on testing next generation of Artificial Intelligence (AI) models against a range of critical national security, safety and societal risks”.
The Statement announced that these parties agreed to:
- invest in public sector capability for testing and other safety research;
- collaborate with one another and promote consistent approaches in this effort; and
- share the outcomes of these evaluations, where sharing can be done safely, securely and appropriately.
The Statement also noted that the UK has launched an AI Safety Institute “to build public sector capability to conduct safety testing and to research AI safety” and that the first milestone of the Institute will be “to build its evaluations process in time to assess the next generation of models, including those which will be deployed next year”.
In support of the Statement, Deputy Prime Minister of Australia Richard Marles is quoted as saying “Australia is pleased to partner with the UK on this important work”.
State of the Science Report
The countries represented at the Summit also agreed to support the development of a new ‘State of the Science’ Report on the capabilities and risks of frontier AI. The report:
- will “summarise the best of existing research and identify areas of research priority, providing a synthesis of the existing knowledge of frontier AI risks”
- is intended “to inform both international and domestic policy making”
- will not, however, make policy or regulatory recommendations.
The report will be published before the next AI Safety Summit to be hosted by South Korea in approximately six months.
Impact of the UK AI Summit on Australia
None of these developments will have any immediate impact on Australian businesses. For example, the Bletchley Declaration is high-level and gives no indication about the form that any regulation could take (eg, whether it is voluntary, or comprehensive legislation, or somewhere in between).
Nonetheless, the Summit adds to the momentum for AI regulation both internationally and in Australia. In an interview a few days after the event, the Federal Industry and Science Minister Ed Husic stated “I've previously said that the days of self-regulation for technology are over, and this summit confirms it”, describing the Summit as indicating a broad agreement by governments “that we can't just leave the companies to themselves on this, or as it's been described, that they mark their own homework”.
There will also be practical impacts as a result of the proposed collaboration in research and testing, which the Minister stated will need to be factored into the Australian government’s proposed approach to AI.
International Code of Conduct for Organizations Developing Advanced AI Systems
On 30 October 2023, the G7 (Canada, France, Germany, Italy, Japan, the United Kingdom and the United States, together with the European Union) released an 11-point voluntary code of conduct for organizations developing “the most advanced AI systems”. This was a result of the Hiroshima AI process established at the G7 Summit in May 2023, aimed at promoting international regulation of advanced AI systems.
The Code calls on organisations to take a series of actions, including in relation to the following matters:
- risk management: identifying and mitigating risks across the AI lifecycle (including chemical, biological, radiological and nuclear risks, offensive cyber capabilities, self-replication, and risks to health and safety, human rights and society);
- transparency: publicly reporting advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use;
- governance and policies: developing, implementing and disclosing AI governance and risk management policies, grounded in a risk-based approach – including privacy policies, and mitigation measures;
- security: implementing robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle;
- authentication and provenance: deploying reliable content authentication and provenance mechanisms, such as watermarking (a simplified illustration follows this list); and
- intellectual property and privacy: implementing appropriate data input measures and protections for personal data and intellectual property.
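By way of illustration only, the sketch below shows one very simple shape a content provenance record might take: hashing a piece of generated content, recording who generated it and when, and signing the record so a recipient can detect tampering. The field names, the shared-secret HMAC signature and the example values are assumptions made for this sketch; real-world mechanisms (such as the C2PA standard or statistical watermarking of model outputs) are considerably more sophisticated and are not prescribed by the G7 code.

```python
# Minimal, illustrative provenance record for a piece of AI-generated content.
# Uses only the Python standard library; the field names and HMAC signature
# scheme are assumptions for this sketch, not taken from the G7 code of conduct
# or any particular standard (production systems typically use public-key
# signatures and richer manifests).
import hashlib
import hmac
import json
from datetime import datetime, timezone

SECRET_KEY = b"example-signing-key"  # placeholder; real deployments manage keys securely


def create_provenance_record(content: bytes, generator: str) -> dict:
    """Build a signed record describing who generated the content and when."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance_record(content: bytes, record: dict) -> bool:
    """Check both the record signature and that the content matches the recorded hash."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())


if __name__ == "__main__":
    text = b"An AI-generated paragraph."
    rec = create_provenance_record(text, generator="example-model-v1")
    print("Original content verifies:", verify_provenance_record(text, rec))      # True
    print("Tampered content verifies:", verify_provenance_record(b"edited", rec))  # False
```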
The Code of Conduct acknowledges that different jurisdictions “may take their own unique approaches to implementing these actions in different ways”, but that organisations should follow the code “in line with a risk-based approach, while governments develop more enduring and/or detailed governance and regulatory approaches.”
The immediate practical impact of the Code may be limited. However, like the UK’s AI Safety Summit and the US Executive Order, it:
- demonstrates a growing international consensus on the need to address AI-related risks, particularly around national security issues; and
- promotes the idea of regulating “the most advanced AI systems” (sometimes called frontier AI systems) separately from other AI systems (even when those other AI systems may be employed in extremely high-risk applications).
Use of AI in the defence sector
There have also been a number of recent developments relevant to the use of AI in the defence sector. While none of these developments have resulted in any enforceable law in Australia, the increased support for responsible AI in military contexts by Australia and its key allies such as the United States is likely to have practical impacts on suppliers of AI-related defence systems and on the procurement decisions of the Commonwealth.
UN Resolution on lethal autonomous weapon systems
In the shadow of the AI Summit, the UN General Assembly passed its first resolution on lethal autonomous weapon systems. The resolution:
- stressed “the urgent need for the international community to address the challenges and concerns raised by autonomous weapons systems”; and
- requested that the Secretary-General prepare a report on lethal autonomous weapons systems “on ways to address the related challenges and concerns they raise from humanitarian, legal, security, technological and ethical perspectives and on the role of humans in the use of force”.
The resolution passed with 164 states in favour and 5 against (Belarus, India, Mali, Niger, Russian Federation), with 8 abstentions (China, Democratic People’s Republic of Korea, Iran, Israel, Saudi Arabia, Syria, Türkiye, United Arab Emirates). Although the Resolution might not have an immediate practical impact, it demonstrates the serious concerns the international community (including countries such as the United States, the United Kingdom and Australia) has about lethal autonomous weapon systems.
Political declaration on Responsible Military Use of Artificial Intelligence and Autonomy
On the same day, 31 nations (including the US, UK, Canada, Australia, Germany, and France) signed a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, covering issues such as compliance with international humanitarian law, human oversight, bias, transparency and safety.
In this declaration, countries such as Australia have committed to:
- implement these measures when developing, deploying, or using military AI capabilities, including those enabling autonomous functions and systems; and
- make public their commitment to this Declaration and release appropriate information regarding their implementation of these measures.
Although the military use of AI is outside the scope of the recent “Safe and responsible AI in Australia” discussion paper, the Australian government has stated that it “will continue to work with our international partners to ensure future developments in AI are in line with these principles”.
US Department of Defense position on AI
In a separate speech on 2 November about ‘The State of AI in the Department of Defense’, the US Deputy Secretary of Defense, Kathleen H. Hicks, reaffirmed the US’s position on the use of AI in weapons systems, stating:
- “There is always a human responsible for the use of force. Full stop.”
- The US will “maintain human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate or terminate nuclear weapons.”
While this may give some comfort to those of us worried that ChatGPT-style hallucinations might result in a nuclear apocalypse, it will also be necessary to implement more practical measures to mitigate the phenomenon of “automation bias” (an issue that was noted in the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy). These statements also seem slightly at odds with Secretary of State Antony Blinken’s recent comment that “we believe that artificial intelligence should not be in the loop or making the decisions about how and when a nuclear weapon is used”.
What is on the horizon?
It seems unlikely that artificial intelligence will “stop generating” new text any time soon. Three developments that we will be watching closely are:
- the Commonwealth response to the submissions made to the “Safe and responsible AI in Australia” discussion paper (which is reported as being expected by the end of the year)
- the negotiations of the draft EU AI Act (which at the time of writing are reported to have reached a deadlock over proposed regulation of ‘foundation models’)
- a rumoured agreement between the US and China on autonomous weapons.
[1] The NIST Framework for Improving Critical Infrastructure Cybersecurity is one of the frameworks currently specified in section 8(4) of the Security of Critical Infrastructure (Critical infrastructure risk management program) Rules (LIN 23/006) 2023, which set out the frameworks with which critical infrastructure risk management programs must comply under the Security of Critical Infrastructure Act 2018 (Cth).
To follow our updates, subscribe by selecting ‘Tech & Data’ as your area of interest here.