Navigating the ethics of AI – What questions should you be asking?


Written by Kate Creighton-Selvay, Bryony Evans and Clea Denham

The increased use of Artificial Intelligence raises serious questions: Can – or should – we automate decision making?

How should we assess the commercial advantages and reputational risks that accompany AI?  How will we establish, operationalise, and uphold AI ethics principles?  In this article, we take a look at some of the key questions organisations developing, deploying or acquiring AI should be asking.

Questions to ask – a snapshot

  • Is this AI? AI is not a term of art. Broadly speaking, it is a combination of data and technology that seeks to emulate the decision making that humans do naturally, and whether a particular system falls within that concept should be considered for each use case.
  • Should we use AI for this use case? AI should be used mindfully, and only where it makes most sense. This involves understanding the purpose, capabilities and limitations of an AI system.
  • What does our impact analysis tell us? To maximise the advantages of AI and identify its potential risks for a given use case, companies should conduct an impact analysis from the infancy of a project (e.g. at the time of investment) and at various stages throughout its life cycle. It is important to analyse the specific context for each AI use case in order to determine its risk profile (including risks such as the use of individuals' personal information).
  • How are we operationalising our ethical AI principles? Responsible AI strategies are critical to managing the use of AI and its reputational implications. AI ethics principles alone are not sufficient – they are not self-executing. Companies need to proactively establish and operationalise the rules that underpin each principle.
  • What are our ethical AI governance processes? Successful use of AI is tied to robust governance processes that take into account relevant legal, risk and ethical issues. This requires an effective framework for making decisions about the use of AI, triggered at key points during consideration of an AI project.
  • What regulation applies to these activities? For now, in Australia there is no AI-specific regulation, but earlier this year a roadmap for responsible innovation was tabled in Parliament, and developments overseas suggest this is a space to watch. In the meantime, regulators have turned their focus to existing laws to provide protection in the AI space (including, for example, consumer protection laws, privacy laws and discrimination laws), so organisations should think carefully about whether their use of AI complies with existing laws.

What is AI?

There is no consensus on the definition of AI. It is best to think of AI not as a “term of art”, but rather as an umbrella term with no rigid boundaries. At its core, AI is a combination of data and technology that seeks to emulate the decision making that humans do naturally. In broad terms, AI has two key inputs: the decision-making algorithm and the data. The hallmark of AI is that it is a dynamic, iterative system that is designed to change and refine over time.

Ask:

  • How will the algorithm be used?
  • What data is used for input?
  • What is the degree of automation?
  • How can transparency be provided on how the algorithm works?

When and how should we use AI?

While AI technology is being adopted across sectors at an accelerating pace, it is important to consider when it should be used.

AI has enormous positive potential and great promise for addressing the grand challenges society faces. However, it also introduces significant and novel challenges. It is important to be aware of the complexities and risks that may arise before deciding to use AI.

As a broad rule, companies should use AI mindfully, and only where it makes most sense. For example, AI will be most effective when implemented to execute binary decisions. An AI system is less likely to be helpful with issues where there are a number of possible outcomes and no single objectively correct answer.

Before deciding to use AI, and from a project’s infancy and at strategic points through the project, companies should conduct an AI impact analysis. This involves understanding the purpose, capabilities and limitations of the AI system and analysing the context in which it will be used.

An AI impact analysis also involves considering the accompanying socio-technical risks. For example, could this technology exacerbate harms to society? How does this AI change the affordances and interactions between humans and machines? How will it change the status quo relative to the non-AI powered world?

In thinking about how AI will be used, it is important to consider the extent to which there are, or should be, limits on the use of the AI technology. For example, suppliers of AI technology – and their customers – should consider:

  1. are there technical controls in the system to avoid the AI system being used for other purposes;
  2. what disclosures are made about the limitations of the technology, and how are these reflected in terms and conditions; and
  3. how are AI-specific terms of use provided for, and how will they be enforced.

Each of these considerations should inform if, when and how AI is used. These foundational questions should be baked into the design and embedded in the ultimate decision-making process.

By understanding the AI system’s real risk profile, companies are better placed to use the AI – and realise its benefits. Earlier, deeper discussions will present companies with many more opportunities to maximise the advantages of using AI.

How will we practically implement and operationalise ethical AI principles?

Companies should adopt and implement AI ethics principles at a company-wide level that govern how AI is deployed. These principles may include a commitment to transparency, accountability and fairness. However, principles alone are not enough in this space.

To operationalise AI principles, companies should specifically define the rules and methods that teams need to follow to meet and uphold each principle. This crucial step involves translating the principles into practice, and the resulting practices need to work at scale and for the particular use case. Multidisciplinary perspectives and regular revision of these rules are key, because AI systems, by their nature, change over time. For this reason, companies should implement robust and effective monitoring systems that collect and integrate feedback, to ensure the chosen techniques achieve the principle or policy objective at all stages of the AI lifecycle.

Governance procedures are also important for companies at all levels of maturity and governance structures are useful mechanisms to navigate the use of AI. For example, do you have senior leadership engagement with the introduction of AI, including at the Board level?  Does using AI fit within existing risk appetites and approaches to corporate social responsibility?  Do you have a risk and governance framework that applies throughout the AI lifecycle?  Should you establish a separate ethics committee to review and consider the use of AI?  These are just some AI-specific considerations from a governance perspective. Regardless of the AI technology, companies need to undertake a deliberative and iterative process to practically operationalise AI principles.

Does AI regulation apply?

For now, in Australia there is no AI-specific legislation. But of course, your use of AI could be the subject of existing regulatory regimes. For example, in October of this year, Australia’s Privacy Commissioner handed down the seminal Clearview AI decision, finding that Clearview AI had breached the privacy principles through its use of facial recognition software. And in May of this year, the Australian Human Rights Commission’s (AHRC) roadmap for responsible innovation, the “Human Rights and Technology” report, was tabled in Parliament. In that report, the AHRC made 23 recommendations for the regulation of AI. For example, the AHRC specifically recommended that the Australian Government legislate to provide stronger, clearer and more targeted human rights protections regarding the development and use of biometric technologies, including facial recognition. Until this is done, the AHRC encouraged a moratorium on the use of such technologies in high-risk areas. The AHRC also recommended further privacy law reforms to protect against the most serious harms associated with biometric technologies.

Given the breadth and complexity of AI and the AI supply chains, it is unclear how AI will be regulated in the future in Australia, including whether it will be specifically regulated or whether it will be brought within the scope of existing laws.

Internationally, AI regulation is far from uniform.

In the EU, the Artificial Intelligence Act (EU AI Act) was released earlier this year for public comment. It sets out horizontal rules for the development and use of AI and takes a risk-based approach. Like the GDPR, the EU AI Act will, if enacted, have extra-territorial application. Australian organisations will therefore need to consider whether they are likely to be captured by, and required to comply with, the EU AI Act. This will involve considering whether the company uses an ‘AI’ system (as defined under the legislation) and whether the company is:

  • a ‘provider’ of the AI system within the EU (even if established outside the EU); or
  • a ‘provider’ or ‘user’ of the AI system, which produces outputs used in the EU.

The EU AI Act may also have a broader impact where, for example, multinational companies commit to EU AI Act compliance across their global operations for uniformity purposes, including in Australia.

The EU AI Act is considered to be a pivotal development in the regulation of AI internationally. It heralds likely implications for privacy and data protection as well as AI regulation outside the EU.

AI regulation is also progressing in parallel in the US, with NIST developing an Artificial Intelligence Risk Management Framework and the White House Office of Science and Technology Policy consulting on an AI Bill of Rights.

As the global regulatory landscape continues to evolve, it will be important for companies to monitor the introduction of new regulatory regimes, the application of existing regulatory regimes to AI use cases, and recommendations and guidance on best practice.

At KWM’s inaugural Digital Future Summit, KWM partners Kirsten Bowe and Patrick Gunning discussed the big questions around what we can do, versus what we should do when it comes to the power of AI with Natasha Crampton – Microsoft’s Chief Responsible AI Officer – and Edward Santow – Industry Professor, Responsible Technology, University of Technology Sydney. This article draws on the panel’s discussion of navigating the ethics of AI. Watch a recording of the panel discussion.
