
5 key considerations when investing in Artificial Intelligence


As organisations increasingly adopt a diverse range of artificial intelligence (AI) products for various purposes, the need to understand, highlight and mitigate risks inherent in the development, deployment and use of AI systems is becoming extremely important. In this article, we have summarised 5 key considerations for investors when investing in AI systems.

As reflected in the rapid growth of ChatGPT after its initial release in November 2022 (reaching 100 million monthly active users in just 6 weeks), the market for AI is expected to show strong growth for the foreseeable future. Startups developing generative AI products, which use algorithms to create new content, have already generated more than $1 billion in revenue, and enterprise buyers continue to demand AI applications tailored to their needs.

While there is no consensus on the definition of AI, it is best to think of it as an umbrella term with no rigid boundaries. At its core, AI is a combination of data and technology that seeks to emulate the decision making that humans do naturally. The hallmark of AI is that it is dynamic, iterative and designed to change and refine over time.

Potential regulatory and legal exposure points will vary depending on the type of AI that is being implemented, the datasets on which it has been trained and tested, and how the AI system itself is used. However, from an investor's point of view, key considerations for any investment in an AI-centric business include:

IP and copyright

A primary concern of investors in a business that is dependent upon an AI system will be ownership of intellectual property (IP) rights in the AI system itself. This will involve consideration of who was involved in developing the AI system, including who was responsible for writing or developing the algorithms used to run it. This is not all that different from any tech-based business. However, in the case of AI, separate IP issues can also arise in relation to the data used to train the AI system, and the ownership and use of the outputs it produces. These are all matters an investor should take care to understand, as they will directly impact the value of the AI system.

AI systems may be trained in a way that infringes IP rights, leading them to create output that also infringes those rights. Where an AI-generated work does infringe copyright, questions may arise as to whether the user of the AI system or the owner of the AI system is responsible. Potential liability may also be affected by contractual arrangements that are in place between the owner/developer and the user. Use of AI systems may also lead to an inadvertent waiver of IP rights where the owner/developer of the AI system provides a limited licence to end users to use the AI system, but retains IP ownership over any outputs generated. That may be a significant issue where the user wishes to commercially exploit the outputs independently of the owner/developer, and it will also be important for any investor when placing a value on the AI business. Accordingly, these matters will be an important area for due diligence before committing to any investment.

Privacy and confidentiality

To the extent that training or input data used by an AI system contains information about identifiable individuals, the use of that data may raise privacy concerns. Potential investors should be cognisant of this, and confirm that appropriate privacy consents have been obtained for the AI system to use any personal information. Even where de-identified personal data is used, the risk that the data could be reasonably re-identified when combined with other information used by the AI system should be considered. AI systems are capable of creating or inferring personal information, and this may also give rise to privacy obligations.

Privacy should also be considered when using an AI model hosted by a third party, as information input into an AI model may not be secure and could in fact constitute a disclosure to that third party. This presents a risk in the case of confidential or commercially sensitive information, and is a key reason why some businesses have been cautious about providing their information to public-facing versions of many large AI systems. In many cases, businesses may wish to train their own ‘proprietary’ AI systems based only on their own input data in a closed and controlled environment. Again, this will be important for any investor to understand. It may mean that an investment in the AI system does not extend to the data used for training purposes, and that there are many separate systems rather than one unified system based on the same underlying AI engine, which may make the underlying product harder to commercialise.

Accuracy and reliability

When there are issues in the training data or system design of an AI system, it is common for an end user to see errors, variability and/or misrepresentations in the output of the AI system. This can be particularly problematic if an AI system is used to make decisions affecting individuals in a significant way, and if bias emerges in the AI system such that it favours certain discriminatory outputs or perpetuates harmful stereotypes or power dynamics.

The reasoning behind an AI system may also be difficult to clearly explain, particularly where machine learning is involved. This is known as the ‘black box problem’, where it is unclear how an AI system has reached a certain conclusion. Accordingly, it will be prudent to ensure that an AI system’s design (including its assumptions and how it makes its choices) is transparent with appropriate moderation tools applied, and to consider which party holds responsibility where an output is found to be inaccurate, misleading, biased or unfair. From an investor’s perspective, there are potentially significant reputational risks that need to be considered. Media stories abound about weird, wonderful and occasionally dangerous ‘hallucinations’ produced by AI systems, as well as ways in which AI systems can be exploited to generate harmful or offensive material. Unless appropriate safeguards are in place, this may turn off corporate customers and lead to a loss of confidence amongst consumers. Accordingly, in order to ensure the value of the underlying system is adequately protected, this is another important area for investors to cover in due diligence.

Regulation

As the global regulatory landscape on AI continues to evolve, it will be important for investors to consider the potential impact of new regulations on AI systems. These may also differ according to the applicable jurisdiction.

Given the present lack of AI-specific laws and regulations in Australia, the legal implications of AI will depend on the application of existing laws to AI systems. These include, but are not limited to, privacy laws, consumer laws, IP laws and anti-discrimination laws.

The proposed EU AI Act is likely to influence how many regulators approach AI regulation around the world. The EU AI Act adopts a horizontal risk-based approach where the obligations on providers, deployers, importers and distributors of AI systems increase with the level of risk associated with the AI system. The proposal will only become law once both the Council and the European Parliament agree on a common version of the text. 

Changes in these laws to respond to AI may either support or undermine different AI use cases. Accordingly, before committing to an AI-based business, it will be important for investors to gain a thorough understanding of both the current and future regulatory landscape, to ensure that there are no likely blockers for their relevant business plans.

Effective governance frameworks

Given the unique risks associated with AI, it is important to have effective governance processes specific to AI. Potential investors in companies deploying AI systems should consider whether there are appropriate governance processes in place for those systems. This will be a key way to ensure that the other risks mentioned above have been identified and appropriately addressed in a way that will protect the underlying value of the business.

A best practice governance framework may involve the following:

  • ‘AI Principles’ to define the overall approach for how AI can be responsibly used and/or developed in the organisation. These principles may include a commitment to transparency, accountability and fairness;
  • an ‘AI Governance Framework’ setting out policies and procedures that will implement the ‘AI Principles’ in practice, such as clearly defined roles and responsibilities for decision-making and oversight; and
  • processes and practices which operationalise the ‘AI Principles’ and ‘AI Governance Framework’ by prompting the organisation to think critically about AI and how to manage the risks it poses. For example, conducting ‘AI Impact Assessments’ to ensure that all risks and available management strategies are considered, developing ‘Incident Response Plans’ to deal with the potential consequences of misuse or malfunction, and providing appropriate training for employees, contractors and other relevant members of the organisation.

Bonus: environmental impact

As more complex AI models develop, there is a growing need for more computing power to process the data, as well as data centres to host it. This not only creates competition in the market for access to processing power and data centres, but has also led to growing concerns about the negative impact of AI on the environment, including the e-waste produced by AI technology and its increasingly large carbon footprint. Accordingly, strategies for the sustainability and long-term viability of AI systems, along with associated ESG risks, are also an important factor for investors to consider.
