Tell me in two minutes
- Lawyers are starting to embrace generative AI to assist with legal tasks, including legal research, drafting and reviewing contracts, and analysing case law.
- However, whenever lawyers use generative AI in the course of their legal work, they must ensure that their use complies with their ethical duties.
- Although the ethical duties that currently apply to solicitors and barristers under Australian law are technology agnostic, lawyers should not discount them: those duties apply equally to the use of generative AI as to any more traditional way of delivering legal services.
- The key ethical duties that Australian solicitors must keep in mind while using generative AI include:
- Acting competently and in the best interests of clients: generative AI tools can improve the quality or efficiency of client outcomes, but a lawyer must always understand the limitations of generative AI when using it for a particular task (including the potential for hallucinations/inaccurate output as demonstrated in the now famous cautionary tale of Mata v Avianca, Inc);
- Not disclosing client confidential information: confidential client information should never be entered into public generative AI systems without the client’s consent (whether as training data or as an input). If a lawyer intends to use a generative AI product that is not a public version and operates within a closed system, the lawyer must understand the security and data protection features of that AI system (including, but not limited to, whether any data provided by the lawyer will be disclosed to a third party); and
- Not misleading the court: if legal material is produced (either in whole or part) by generative AI, a lawyer must always validate and confirm the accuracy of the material before submitting it to a court.
What is Generative AI?
Generative AI is much more than just ChatGPT. Broadly speaking, generative AI refers to AI products that can be used to produce new content (such as text, images and video), as opposed to performing other tasks like classification or data analysis. Common examples include chatbots (e.g. ChatGPT, Microsoft’s Bing), image generators (e.g. DALL-E 2, Midjourney), other LLM-based services (e.g. Microsoft Azure OpenAI, Microsoft Copilot) and other products (e.g. voice cloning and text-to-music tools).
When using generative AI, lawyers should keep in mind the following principles about how generative AI systems generally operate:
- as a general rule, generative AI systems rely on statistical models and are not deterministic (nor are they search engines). This means that the output they produce in response to an input (i.e. your question) is based on statistics derived from their training and design, and that they can generate different outputs when given the same input multiple times (illustrated in the short sketch after this list);
- the output of a generative AI system is not simply programmed – it is a product of the data the system was trained on, the specific features of the system (including how it has been tuned, filtered and reinforced) and the prompts entered by the user seeking a response;
- the accuracy of a generative AI system depends on the suitability of the prompts and of the system itself. A number of generative AI systems are well known to ‘hallucinate’ (i.e. provide convincing but inaccurate responses), and any generative AI product may produce output that is inaccurate, biased or discriminatory, or that causes legal issues (e.g. breaching intellectual property rights or privacy law), depending on how the product has been designed, what data it has been trained on and how it has been used; and
- not all generative AI systems are equal – a generative AI system trained on legal-specific source material will produce vastly different outputs compared with a generative AI tool trained on data scraped from the internet (e.g. Reddit).
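To make the first of these points concrete, the short Python sketch below (an illustration only, not legal-practice guidance) sends the same question to a generative AI model three times via the OpenAI Python SDK. It assumes the SDK is installed and an API key is configured, and the model name and prompt are placeholders. Because the output is sampled at a non-zero ‘temperature’, the responses will usually differ from run to run, which is why the output of a generative AI system cannot be treated like the result of a deterministic database or search-engine query.

```python
# Minimal illustration: the same prompt can yield different outputs because
# generation is sampled, not deterministic.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the
# environment; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()
prompt = "In one sentence, what is a limitation period?"

for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # non-zero temperature means the output is sampled
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}")
```

Setting the temperature to zero reduces this variability but does not make the answers accurate, which is why the validation practices discussed below remain essential.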
Whether a generative AI system is suitable for a particular legal use case will depend on a number of factors including:
- how it has been designed;
- what data it has been trained upon;
- whether the inputs are disclosed to a third party; and
- how the model evolves over time.
As a general rule, generative AI tools designed for the public should not be used for client confidential information or for specific legal concepts or questions. As a lawyer, you need to carefully consider whether generative AI tools are suitable for the job at hand.
In July 2023, a report produced in collaboration between Microsoft and the Tech Council of Australia estimated that 10% of a solicitor’s tasks may be automated and 32% could be augmented by generative AI.[1] Example use cases proposed in the report include:[2]
- Automating: searching documents for uses of specific words and locating precedents in historical judgments;
- Creating: generating standard legal documents such as confidentiality deeds, licences and conveyancing documents;
- Advising: identifying contract loopholes and suggesting solution clauses; and
- Exploring: ‘Horizon scanning’ law changes in other jurisdictions and analysing outcomes to make predictions.
In practice, research from Thomson Reuters reveals that “legal professionals are cautiously optimistic towards large language models” with 40% of firms currently experimenting with generative AI systems.[3]
What does this mean for the ethical duties of lawyers?
The fundamental duty to act competently and in the best interests of clients applies to the provision of legal services regardless of the technology that a lawyer uses to deliver those services. Where that technology is generative AI, lawyers must consider:
- Accuracy: Depending on the generative AI system being used, it may well produce a response to your question that looks very convincing but may or may not be accurate. Often dubbed ‘hallucinations’, convincing but inaccurate responses from a generative AI system can be useful when creativity is required, but they are problematic in a legal setting.
We are already seeing cases where hallucinated content is submitted by lawyers to the courts, whether fake citations, as in Zhang v. Chen,[4] or even the fabricated content of a non-existent case generated by ChatGPT, as in Mata v. Avianca, Inc.[5]
In these cases, the common thread is that the lawyers in question did not realise that the generative AI system they used (i.e. the public version of ChatGPT) could produce inaccurate ‘hallucinated’ responses, and they did not check the accuracy of those responses. In some cases, the lawyers even admitted that they simply presumed the generative AI system was a ‘super search engine’. Failing to check the accuracy of responses provided by a generative AI system before incorporating them into legal documents is likely to breach a lawyer’s ethical duty of competence.
Key Takeaway: If you use generative AI as a legal research tool or to generate legal content, you must always critically review, validate and correct the outputs to ensure that they are consistent with your own legal knowledge, experience and research (one illustrative approach to surfacing citations for checking is sketched at the end of this section).
- Appropriate tools: Currently, the most well-known generative AI systems on the market are general purpose (e.g. ChatGPT). General purpose AI systems (as opposed to domain-specific systems) are not trained on legal or client-specific material, nor are they optimised for use in a legal setting. Although they can be fine-tuned to improve the suitability of their responses, we consider it the responsibility of lawyers to use only generative AI systems that are fit for purpose for the legal task being undertaken. That is, the duty of competence requires that a lawyer understands how to use the AI system (e.g. how best to write a prompt to produce the most appropriate output) and the key limitations of the AI system (at least to the extent that those limitations affect the quality and accuracy of the legal services being provided to the client, e.g. legal advice or court documents).
Key Takeaway: You should always consider whether a particular generative AI system is appropriate for the legal task you are using it for. Ideally, you should also undertake training to understand how generative AI systems work and their limitations as relevant to their use in providing legal services.
- Competency: The requirement for competency includes delivering client outcomes in a timely and cost-effective manner. Although Rule 4 does not require lawyers to use generative AI systems, as generative AI systems improve and more legal-specific AI systems become available, a new question emerges as to whether the duty of competency encompasses competence with technology. If it did, it would be incumbent on lawyers to use technologies like generative AI where they can achieve more efficient or effective outcomes. At this stage, we do not consider Rule 4 to extend that far; however, as the use of tools like generative AI continues to evolve, this duty may evolve as well.
More broadly, there is also an open question as to whether lawyers should disclose the use of generative AI in the provision of legal services. The NSW Bar Association has issued guidelines stating that barristers should be transparent with clients about their use of AI tools (including the nature of the AI tool and the known limitations of using AI in legal practice).[6] Similarly, the Law Society of NSW has suggested that solicitors should be transparent about when they use generative AI.[7] However, we do not consider that transparency about the use of generative AI tools alone discharges the duty to act competently in delivering advice to clients. Rather, the fundamental duty is to take steps to ensure that the advice is accurate and to validate the outputs of any generative AI system used to prepare advice or other legal documents for clients.
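To illustrate the validation step referred to in the Key Takeaway above, the sketch below shows one simple, assumed approach (the regular expression, the function name citations_to_verify and the example draft are our own illustrations, not an established tool or firm process). It pulls strings that look like Australian medium-neutral case citations out of AI-generated text so that each authority can be located and read in full before it is relied on; it is not a substitute for a lawyer’s own review of the substance of the output.

```python
# Illustrative sketch only: extract citation-like strings from AI-generated
# text so that each one can be manually verified against a primary source
# (e.g. the court's published judgment). The pattern is a simplified
# assumption and will not catch every citation format.
import re

# Matches medium-neutral citations such as "DPP v Khan [2024] ACTSC 19".
CITATION_PATTERN = re.compile(
    r"[A-Z][\w'.-]*(?: [A-Z][\w'.-]*)*"   # first party (capitalised words)
    r" v "
    r"[A-Z][\w'.-]*(?: [A-Z][\w'.-]*)*"   # second party (capitalised words)
    r" \[\d{4}\] [A-Z]+ \d+"              # e.g. [2024] ACTSC 19
)

def citations_to_verify(ai_generated_text: str) -> list[str]:
    """Return candidate citations that must be checked before they are relied on."""
    return sorted(set(CITATION_PATTERN.findall(ai_generated_text)))

draft = "The principle was applied in DPP v Khan [2024] ACTSC 19 at [43]."
for citation in citations_to_verify(draft):
    print("Verify against the published judgment:", citation)
```

A script like this only flags citations for checking; confirming that each case exists, says what the draft claims and remains good law is still the lawyer’s task.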
[4] Zhang v. Chen, 2024 BCSC 285.
[5] Mata v. Avianca, Inc., 22-cv-1461 (PKC) (S.D.N.Y. Jun. 22, 2023).
[7] Ellie Dudley, ‘Half of Australian lawyers fear AI will take their jobs, research reveals’, The Australian, 23 August 2023.
The key to generative AI is data – large datasets are required to train the relevant model and then to refine and improve it. For example, it has been reported that OpenAI’s GPT-3 (which is used to power ChatGPT) was trained on 570GB of filtered data (a large amount of which was scraped from the internet).[8] This data was then subject to further refinement. In addition, where a user is using the free, public-facing version of ChatGPT and has not turned off chat history, user inputs may be used to train and improve the ChatGPT models.[9] Practically, this means that data (both the initial training datasets and user-inputted data) is disclosed to third parties and, as a result, is not confidential.
There are a multitude of legal issues in how generative AI systems are trained (especially in relation to intellectual property rights and privacy). From an ethical perspective, however, the practical point is that a lawyer should not input any client confidential information into a generative AI system without understanding whether that user-inputted data will be used to further train the system or will otherwise be disclosed to third parties. This risk is highest with public-facing generative AI systems such as the free, public-facing version of ChatGPT, but it is also a consideration when assessing how closed-system generative AI tools operate.
Practically, this risk can be managed by using generative AI systems that are trained by, and accessible only to, your organisation or law firm, and which have been carefully vetted so that you understand how the system uses data. Key questions to ask include:
- What client information is confidential to the client (particularly in a law firm setting)?
- What data is used to train and refine the AI system?
- Who has access to user inputs? How does the developer use inputs? Will user inputs be accessible to third parties?
- What security and confidentiality protections are in place?
- If overseas disclosure is an issue, where is user input data processed by the generative AI system?
Key Takeaway: You must always consider who will have access to the data you input into a generative AI system and what the security settings are. If the generative AI system is being adopted by your organisation, consider confirming that an AI Impact Assessment has been undertaken.
Practically, the duty not to mislead the court means that the validity of any material presented to a court needs to be independently tested. When material has been produced using a generative AI system, it is imperative that the accuracy and validity of that material is tested and confirmed independently of the system used to generate it. This applies equally to documents submitted to a court and to case citations referenced in court submissions.
Where generative AI material has not been independently verified and is presented to a court, it calls into question not only the behaviour of the lawyer but also whether documents provided to the court can be relied upon. For example, in February 2024, ACT Supreme Court Justice David Mossop identified that a personal character reference submitted in DPP v Khan was likely to have been produced by generative AI. He stated that it was inappropriate to use generative AI to generate references as it “becomes difficult for the court to work out what, if any, weight can be placed upon the facts and opinions set out in them”.[10]
There is no firm position yet in Australia as to whether lawyers must proactively disclose to a court that generative AI has been used in the production of a document being submitted to it; transparency is not currently part of the ethical duty. However, as more instances of inaccurate material being submitted to courts arise, some international courts have moved to impose a proactive disclosure obligation. This position is not uniform, as demonstrated by the examples in the table below:
[10] DPP v Khan [2024] ACTSC 19, [43].
Noting that there are multiple positions in Canada at present and this is based on a sample only.
https://www.courtsofnz.govt.nz/assets/6-Going-to-Court/practice-directions/practice-guidelines/all-benches/20231207-GenAI-Guidelines-Lawyers.pdf
https://www.judiciary.uk/wp-content/uploads/2023/12/AI-Judicial-Guidance.pdf
[Table: examples of international courts’ positions on disclosure of the use of generative AI, including guidance from New Zealand, the United Kingdom and Canada]