1. Introduction
As we enter 2025, the adoption of generative artificial intelligence ("Gen AI") in the financial sector has reached a pivotal moment, presenting both opportunities and challenges. Given the relatively novel nature of Gen AI, how can financial institutions effectively harness its benefits while complying with regulatory requirements?
This article provides a high-level overview of the key benefits and risks associated with Gen AI, as well as recent regulatory developments in Hong Kong* and other major jurisdictions.
2. One-minute quick read
Here are the five key takeaways from this article:
- Gen AI has the potential to transform the way financial institutions provide products and services as well as how they operate and manage risks.
- Gen AI presents material risks to financial institutions, including those relating to accuracy/explainability concerns, potential privacy and intellectual property rights violations, as well as ethical issues and biases, which can undermine trust, reputation and compliance.
- A proactive governance approach to Gen AI is essential, focusing on risk assessment, human oversight, and transparent communications to address the complexities of Gen AI deployment.
- Regulatory landscapes worldwide emphasise ethical principles and risk-based frameworks, encouraging collaboration between the public and private sectors to foster responsible Gen AI utilisation.
- Continuous monitoring and adaptation to regulatory changes are crucial for financial institutions to manage Gen AI risks effectively and ensure compliance while embracing technological advancements.
3. Gen AI Development Milestones
Notes

[1] Despite these developments, President-elect Donald Trump has expressed a critical view of the Biden administration's approach to AI regulation, indicating a potential shift in U.S. government policy towards AI.
[2] Available at https://republicans-science.house.gov/_cache/files/a/a/aa2ee12f-8f0c-46a3-8ff8-8e4215d6a72b/E4AF21104CB138F3127D8FF7EA71A393.ai-task-force-report-final.pdf. In preparing the report, the Bipartisan House Task Force engaged with more than 100 experts, including business leaders, government officials and academics. The report sets out seven guiding principles, 66 key findings and 89 recommendations.
[3] Artificial Intelligence Act, available at https://artificialintelligenceact.eu/
[4] “AI Guidelines for Business Ver 1.0”, published by the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry on 19 April 2024, available at https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20240419_9.pdf
[5] Recent developments regarding Japan's "Basic Law for Promoting Responsible AI" indicate that the Japanese government plans to introduce legal regulations concerning AI at the beginning of 2025, although specific details remain unclear.
[6] “Draft Interim Summary”, published by the AI Policy Study Group on 26 December 2024, available at https://www8.cao.go.jp/cstp/ai/ai_senryaku/12kai/shiryou1.pdf (only available in Japanese)
[7] The OECD is an international forum working with over 100 countries that promotes policies aimed at improving economic growth, stability, and well-being globally by fostering collaboration on social, economic and environmental issues. “OECD updates AI Principles to stay abreast of rapid technological developments”, published by the OECD on 3 May 2024, available at https://www.oecd.org/en/about/news/press-releases/2024/05/oecd-updates-ai-principles-to-stay-abreast-of-rapid-technological-developments.html
[8] “Policy Statement on Responsible Application of Artificial Intelligence in the Financial Market”, published by the Financial Services and the Treasury Bureau on 28 October 2024, available at https://gia.info.gov.hk/general/202410/28/P2024102800154_475819_1_1730087238713.pdf
[9] “High-level Principles on Artificial Intelligence”, published by the HKMA on 1 November 2019, available at https://www.hkma.gov.hk/media/eng/doc/key-information/guidelines-and-circular/2019/20191101e1.pdf
[10] “Consumer Protection in respect of Use of Generative Artificial Intelligence”, published by the HKMA on 19 August 2024, available at https://www.hkma.gov.hk/media/eng/doc/key-information/guidelines-and-circular/2024/20240819e1.pdf
[11] “Use of Artificial Intelligence for Monitoring of Suspicious Activities”, published by the HKMA on 9 September 2024, available at https://www.hkma.gov.hk/media/eng/doc/key-information/guidelines-and-circular/2024/20240909e1.pdf
[12] “Generative Artificial Intelligence in the Financial Services Space”, published by the HKMA on 27 September 2024, available at https://www.hkma.gov.hk/media/eng/doc/key-information/guidelines-and-circular/2024/GenAI_research_paper.pdf
[13] “Generative Artificial Intelligence Sandbox”, published by the HKMA on 20 September 2024, available at https://www.hkma.gov.hk/media/eng/doc/key-information/guidelines-and-circular/2024/20240920e1.pdf
[14] “HKMA announces inaugural cohort of GenA.I. Sandbox”, published by the HKMA on 19 December 2024, available at https://www.hkma.gov.hk/eng/news-and-media/press-releases/2024/12/20241219-5/
[15] “Use of generative AI language models”, published by the SFC on 12 November 2024, available at https://apps.sfc.hk/edistributionWeb/gateway/EN/circular/intermediaries/supervision/doc?refNo=24EC55
[16] “Guidance on the Ethical Development and Use of Artificial Intelligence”, published by the PCPD on 18 August 2021, available at https://www.pcpd.org.hk/english/resources_centre/publications/files/guidance_ethical_e.pdf
[17] “Artificial Intelligence (AI): Model Personal Data Protection Framework”, published by the PCPD on 11 June 2024, available at https://www.pcpd.org.hk/english/resources_centre/publications/files/ai_protection_framework.pdf
[18] Provisions on Administration of Algorithmic Recommendation in the Internet Information Service (《互联网信息服务算法推荐管理规定》), available at https://www.gov.cn/zhengce/zhengceku/2022-01/04/content_5666429.htm (only available in Chinese)
[19] Provisions on the Administration of Deep Synthesis of Internet-based Information Service (《互联网信息服务深度合成管理规定》), available at https://www.gov.cn/zhengce/zhengceku/2022-12/12/content_5731431.htm (only available in Chinese)
[20] Interim Measures for the Administration of Generative Artificial Intelligence Services (《生成式人工智能服务管理暂行办法》), available at https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm (only available in Chinese)
[21] Such as the Basic Security Requirements for Generative Artificial Intelligence Service (《生成式人工智能服务安全基本要求》), available at https://www.tc260.org.cn/upload/2024-03-01/1709282398070082466.pdf (only available in Chinese)
[22] Measures for Labelling Artificial Intelligence Generated Synthetic Content (Draft) (《人工智能生成合成内容标识办法(征求意见稿)》), available at https://www.cac.gov.cn/2024-09/14/c_1728000676244628.htm (only available in Chinese)
[23] The public consultation on the draft closed on 14 October 2024.
[24] “Jiulongpo District Cyberspace Administration Office Imposes an Administrative Penalty on a Local Internet Company in Accordance with the Law” (《九龙坡区网信办依法对属地一互联网企业作出行政处罚》), available at https://mp.weixin.qq.com/s/ZnS_AlJmDzMkkt28Blt8JA (only available in Chinese)
4. Gen AI in the financial services sector: potential use cases and risks
Traditionally, the use of artificial intelligence (“AI”) in the financial services sector has focused mainly on decision-making and predictions by processing structured data. While traditional AI enhances model interpretability and automates operational workflows, particularly in areas using robotic process automation such as algorithmic trading and risk assessment, Gen AI goes a step further.
In contrast to traditional AI, Gen AI focuses on creating content and ideas, addressing more complex decisions, and facilitating richer interactions between humans and machines. Gen AI supports data consolidation and generative search, significantly improving risk management and anti-fraud efforts.
The potential use cases for Gen AI in financial services are extensive and transformative, enabling better resource allocation and the development of more innovative solutions for customers. At the same time, Gen AI presents several potentially significant risks. These use cases and risks are summarised in the table below.
5. Overview of the global Gen AI regulatory developments
Different jurisdictions have implemented various regulatory approaches to address the risks associated with Gen AI, taking into account their distinct economic, social and legal environments.
United States
For example, in the United States, the Biden Administration issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in October 2023. The executive order is aimed at ensuring the responsible development and deployment of AI technologies. Among other things, the executive order emphasises the establishment of standards for AI safety and security, the promotion of fairness and transparency in AI systems, and the protection of privacy and civil liberties. Additionally, it calls for increased investment in AI research and development and the creation of a national AI task force to coordinate efforts across U.S. federal agencies. The executive order also highlights the importance of international collaboration and addresses the potential economic and workforce impacts of AI[1].
Key US financial regulators, including the US federal banking agencies, the Securities and Exchange Commission (“SEC”), the Commodity Futures Trading Commission (“CFTC”) and the Consumer Financial Protection Bureau (“CFPB”), have also been actively addressing the implications of AI in the financial sector, releasing proposals or guidelines to ensure that AI technologies are used responsibly and do not harm consumers or the stability of financial markets. Key areas of focus include the use of AI in credit scoring, trading algorithms, and fraud detection. Regulators are also working to prevent biases in AI systems that could lead to discriminatory practices and are emphasising the need for transparency and accountability in the use of AI by financial institutions.
As far as the legislative branch of the US government is concerned, the Bipartisan Artificial Intelligence Task Force of the US House of Representatives (“Bipartisan House Task Force”) recently published a comprehensive report on AI in December 2024.[2] In terms of the financial services sector, the report recommends:
- fostering an environment where financial services firms can responsibly adopt the benefits of AI technology;
- encouraging and resourcing regulators to increase their expertise with AI;
- maintaining consumer and investor protections in the use of AI in financial services;
- considering the merits of regulatory sandboxes that could allow regulators to experiment with AI applications;
- supporting a principles-based regulatory approach that can accommodate rapid technological changes; and
- ensuring that regulations do not impede small firms from adopting AI tools.
European Union
While the US so far has not enacted any new federal legislation to address AI, the European Union (“EU”) has introduced its first comprehensive legislative framework for AI – the Artificial Intelligence Act[3] (“EU AI Act”). The EU AI Act entered into force in August 2024, and its requirements will apply in phases, with most coming into effect in August 2026. The EU AI Act adopts a risk-based approach, categorising AI systems into four risk levels (unacceptable, high, limited and minimal risk), each with specific compliance requirements. It applies to AI system providers, deployers, importers, and distributors, regardless of whether they are based inside or outside the EU, as long as the relevant AI system is placed on the EU market or its use impacts people in the EU.
Japan
Japan's regulatory approach to Gen AI revolves around non-binding guidelines instead of formal regulations, setting it apart from the stricter frameworks found in certain other jurisdictions. In April 2024, Japan introduced the "AI Guidelines for Business Version 1.0,"[4] developed by the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry, which encourages businesses to voluntarily adopt responsible AI practices. These guidelines emphasise principles for AI governance aimed at maximising benefits while managing associated risks. Although there has been a proposal for a "Basic Law for Promoting Responsible AI" (provisional title), which suggests a potential move toward enforceable regulations akin to the EU AI Act, the current regulatory focus remains on promoting voluntary compliance.[5]
The AI Policy Study Group, an expert panel established by the Cabinet Office of Japan to develop the legal framework for AI regulation, published a "Draft Interim Summary"[6] on 26 December 2024. This summary emphasises the necessity of implementing certain legal regulations. Importantly, it concludes with a remark stating: "Care must be taken to ensure that legal frameworks do not hinder the promotion of innovation. It is expected that Japan will promptly develop systems, including legal frameworks related to AI, that serve as a model for other countries, making AI research, development, and implementation as seamless as possible". This indicates that future AI regulations in Japan are unlikely to be overly restrictive.
Overall, Japan's regulatory landscape is characterised by an emphasis on soft law and voluntary adherence, while also laying the groundwork for possible future regulations.
6. Global standards: OECD AI Principles
The Organisation for Economic Co-operation and Development (“OECD”)[7] has published a set of AI principles, which have been endorsed by the G20. The OECD’s AI principles emphasise respect for fundamental rights, sustainability, transparency, and robust risk management. The OECD AI principles create a comprehensive framework for promoting the responsible deployment of AI and serve as a guide for regulators worldwide.
In May 2024, the OECD updated its principles to emphasise the crucial role of global regulators in ensuring that AI developers and users operate ethically and responsibly. According to the OECD AI principles:
- Inclusive Growth and Well-Being: Stakeholders should actively manage trustworthy AI to enhance human capability, creativity, and inclusivity while also reducing inequalities and protecting the environment for sustainable development.
- Fundamental Rights and Values: AI developers and operators must uphold fundamental rights and values throughout the AI lifecycle. This includes, among other things, ensuring non-discrimination, protecting privacy, and safeguarding certain freedoms. Safeguards should be implemented to mitigate the potential for misuse and associated risks.
- Transparency and Explainability: AI developers should prioritise transparency by providing clear information about the capabilities of AI systems, their data sources, and their decision-making processes. This transparency enables affected individuals to understand and challenge the outputs of these systems.
- Robustness and Safety: AI systems must be secure and safe throughout their lifecycle. Mechanisms should be established to prevent harm and ensure safe decommissioning or correction, if necessary, while also maintaining the integrity of the information involved.
- Accountability: Organisations that use AI must ensure that AI systems function properly and adhere to established principles. This includes maintaining traceability of datasets, processes, and decisions to promote transparency and facilitate analysis. They should implement a systematic risk management approach throughout the AI lifecycle and encourage responsible conduct to address risks such as biases, human rights issues, safety, and privacy. Collaboration with all relevant stakeholders is essential in this effort.
7. Gen AI-related regulatory developments in Hong Kong
As a leading international financial centre, Hong Kong adopts a balanced approach that embraces technological innovation while addressing the unique legal, ethical, and societal implications of using Gen AI. Below is a thematic overview of some recent Hong Kong regulatory guidelines on Gen AI.
Financial Services and the Treasury Bureau
In October 2024, the Hong Kong Financial Services and the Treasury Bureau (“FSTB”) released the Policy Statement on Responsible Application of Artificial Intelligence in the Financial Market[8] (“FSTB Policy”) to lay down a framework for the responsible use of AI in the Hong Kong financial sector. The FSTB Policy proposes a dual-track approach that simultaneously fosters the advancement of AI technologies and addresses critical concerns such as cybersecurity, data privacy, and intellectual property protection.
The FSTB Policy identifies “three Ds” that characterise the deployment of AI in financial services:
- Data-driven: AI can enhance the analysis of data in the financial industry, significantly increasing efficiency and improving insights and decision-making processes.
- Double-edged: While AI can greatly benefit financial institutions, its misuse can lead to significant risks. As such, AI should not replace human judgment but should complement human capabilities to enable financial institutions to generate informed and effective decisions.
- Dynamic: AI is expected to catalyse innovation, helping to create new products and services and contributing positively to the overall ecosystem of the financial sector.
The FSTB Policy encourages the financial services industry to leverage available AI models and infrastructure. Other governmental agencies and public organisations will also be involved in the Gen AI regulatory framework. For instance, the Hong Kong Police Force will collaborate with international organisations, law enforcement agencies, and the AI industry to address challenges related to cyber policing. Meanwhile, the Investor and Financial Education Council will work to raise public awareness and understanding of AI's opportunities and risks in retail investing and financial management.
Hong Kong Monetary Authority
The Hong Kong Monetary Authority (“HKMA”) has taken a proactive approach to guiding financial institutions on AI regulation by publishing a series of whitepapers and circulars. In November 2019, before Gen AI became prevalent, the HKMA issued the High-level Principles on Artificial Intelligence[9] to guide authorized institutions in adopting AI applications. These principles focus on strong governance, ethical standards, risk management, data quality, and transparency, ensuring responsible usage while addressing opportunities and challenges in the evolving AI landscape. In the second half of 2024, the HKMA issued further guidance to the banking sector on Gen AI use in customer-facing applications from a consumer protection perspective[10] and the use of AI to monitor money laundering and terrorist financing risks[11].
More recently in September 2024, the HKMA published a comprehensive research paper titled Generative Artificial Intelligence in the Financial Services Space[12] (“Gen AI Paper”), which examines the significant potential of Gen AI to transform the financial services industry. According to the Gen AI Paper, the regulatory regime for Gen AI across various jurisdictions emphasises a structured approach based on principles designed to promote ethical usage while mitigating associated risks. These principles, in line with the OECD AI principles, are broadly divided into two categories: Common Principles and Additional Principles. The Common Principles focus on establishing governance and accountability, ensuring fairness, protecting data privacy, and promoting transparency to build trust in AI systems. The Additional Principles emphasise the reliability and sustainability of AI technologies, highlighting the importance of performance resilience and minimising their environmental and social impacts.
In addition, the HKMA has initiated the Gen AI Sandbox[13], inviting authorized institutions to develop and test innovative AI solutions in a controlled environment. The goal is to enhance financial services practices through targeted supervisory feedback and collaboration within the fintech ecosystem. The inaugural cohort of the Gen AI Sandbox includes 15 use cases from 10 banks and four technology partners, focusing primarily on risk management, anti-fraud measures, and customer experience[14]. Participants will onboard to the Artificial Intelligence Supercomputing Centre operated by Cyberport, with technical trials expected to commence early this year, and insights from these trials will be shared to inform future industry practices.
Securities and Futures Commission
The Securities and Futures Commission of Hong Kong (“SFC”) issued a circular in November 2024 titled Use of generative AI language models[15] (“SFC Circular”), which addresses the use of Gen AI language models by licensed corporations (“LCs”). Recognising the potential of widely accessible commercial and open-source AI language models to enhance customer interactions and optimise internal processes, the SFC notes that these technologies can improve overall productivity and enable LCs to allocate human resources to more value-added tasks. However, the SFC also cautions that deploying AI language models introduces significant risks, including the potential for inaccurate and biased outputs as well as cybersecurity threats.
To navigate these challenges, the circular outlines four core principles that LCs need to implement:
- 1st Core Principle: Senior Management Responsibilities – The SFC stresses the importance of robust senior management oversight and the establishment of comprehensive risk management frameworks throughout the AI model lifecycle.
- 2nd Core Principle: AI Model Risk Management – The SFC highlights the necessity for stringent AI model risk management practices, which include thorough validation, ongoing performance monitoring, and a clear understanding of the limitations of AI language models.
- 3rd Core Principle: Cybersecurity and Data Risk Management – The circular outlines the importance of strong cybersecurity measures to safeguard against data breaches and other potential attacks.
- 4th Core Principle: Third-Party Provider Risk Management – The SFC underscores the significance of effective third-party provider risk management, urging LCs to thoroughly evaluate their collaborations and ensure that partnerships meet compliance and security standards.
The SFC encourages LCs to adopt a risk-based approach in line with the above core principles while remaining proactive in adapting to the fast-evolving landscape of AI technologies. In particular, LCs planning to use AI language models in high-risk scenarios must notify the SFC of any significant changes to their business operations and services. Generally speaking, the SFC considers using Gen AI for providing investment recommendations, investment advice or investment research to investors or clients (other than after sales client servicing) as high-risk use cases. The SFC has prescribed a list of risk mitigation measures for LCs to adopt when deploying AI in high-risk use cases, which specifically includes a reference to having a “human in the loop” to address hallucination risks and review the AI language model’s output for factual accuracy. However, the SFC also acknowledged in the SFC Circular that, depending on the specific circumstances of the relevant high-risk use case, it may consider providing flexibility to LCs in the implementation of this requirement.
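By way of illustration only, a "human in the loop" gate of the kind contemplated by the SFC Circular might be sketched as follows. The function names, use-case labels and the withheld-output placeholder below are our own assumptions for the sketch, not terms prescribed by the SFC:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical labels: the SFC Circular treats providing investment
# recommendations, advice or research to clients as high-risk use cases.
HIGH_RISK_USE_CASES = {
    "investment_recommendation", "investment_advice", "investment_research",
}

@dataclass
class ModelOutput:
    use_case: str
    text: str
    approved: Optional[bool] = None  # set by the human reviewer

def release(output: ModelOutput,
            human_review: Callable[[ModelOutput], bool]) -> str:
    """Release an AI language model's output to a client, holding
    high-risk outputs until a human reviewer confirms factual accuracy."""
    if output.use_case in HIGH_RISK_USE_CASES:
        output.approved = human_review(output)  # human in the loop
        if not output.approved:
            return "[output withheld pending revision]"
    return output.text
```

Here `human_review` stands in for whatever review workflow the LC operates; low-risk outputs (such as after-sales client servicing) pass through unchanged.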
Additionally, LCs are encouraged to engage with the SFC early in the planning and development stages to mitigate potential regulatory challenges. As the field of Gen AI continues to advance, the SFC has committed to engaging with the industry to develop more specific guidance on effectively managing Gen AI-associated risks.
Going forward, the Hong Kong government and financial regulators will reflect AI-related risks in relevant regulations and guidelines, continuously reviewing and updating them to keep pace with AI developments and international practices.
Office of the Privacy Commissioner for Personal Data
Personal data privacy is a significant risk consideration related to AI. In response, the Office of the Privacy Commissioner for Personal Data (“PCPD”) released the Guidance on the Ethical Development and Use of Artificial Intelligence[16], which emphasises three data stewardship values: respect, benefit, and fairness. The guidance also encourages the adoption of seven ethical AI principles, including accountability and data privacy. Building on this earlier guidance, the PCPD also released the Artificial Intelligence (AI): Model Personal Data Protection Framework[17] in June 2024 (“PCPD 2024 Framework”). The framework offers practical recommendations for organisations in procuring, implementing, and using AI, including Gen AI, while adhering to the requirements of the Personal Data (Privacy) Ordinance. The PCPD 2024 Framework recommends the following:
- Establish AI Strategy and Governance — Organisations should formulate an AI governance strategy generally comprising (i) an AI strategy, (ii) governance considerations for procuring AI solutions, and (iii) an AI governance committee (or similar body) to steer the process.
- Conduct Risk Assessment and Human Oversight — Organisations should adopt a risk-based approach to mitigate the risks of using AI, including determining the necessary level of human oversight.
- Customise AI Models and Implement and Manage AI Systems — Continuous monitoring, review and user support are essential after the adoption of an AI model to ensure that AI systems remain effective, relevant and reliable.
- Communicate and Engage with Stakeholders — Organisations should communicate and engage effectively and regularly with stakeholders, particularly internal staff, AI suppliers, individual customers, and regulators.
8. Chinese Mainland’s Gen AI regulatory regime
With the rapid advancement of Gen AI technologies and their implications for society, Chinese Mainland has established a comprehensive regulatory regime over the past few years to oversee this evolving field. In addition to the three most important laws in the fields of cybersecurity, data compliance, and personal information protection that Gen AI must adhere to (namely, the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law), the key regulations in the Gen AI field include:
- the Provisions on Administration of Algorithmic Recommendation in the Internet Information Service[18] in 2021
- the Provisions on the Administration of Deep Synthesis of Internet-based Information Service[19] in 2022
- the Interim Measures for the Administration of Generative Artificial Intelligence Services[20] (“AIGC Interim Measures”), which took effect in August 2023 (together, amongst other relevant law and regulations[21], form the “AIGC Regulatory Framework”).
These regulations not only provide a legal framework for the application of Artificial Intelligence Generated Content (“AIGC”) but also set specific compliance requirements for industries that utilise generative AI content. In particular, the AIGC Interim Measures, issued by the Cyberspace Administration of China (“CAC”) in collaboration with six other governmental bodies, represent landmark measures in Chinese Mainland specifically targeting the provision of Gen AI services by organisations and individuals, including the creation of text, images, audio, and video content for the public in the Chinese Mainland. It is noteworthy that the AIGC Interim Measures do not apply to organisations or individuals (1) who use Gen AI technology for research and development purposes only, or (2) who use Gen AI technology for internal use only (e.g. among employees).
Under the AIGC Regulatory Framework, AIGC service providers that utilise algorithms with public opinion attributes or social mobilisation capabilities are required to register or file their algorithms according to established guidelines. The AIGC Regulatory Framework mandates security assessments for Gen AI applications, and service providers must maintain algorithm transparency and undergo continuous monitoring and optimisation. Additionally, Gen AI service providers must protect end users by obtaining consent when collecting personal information and allowing users the right to opt out of commercial marketing.
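Purely as an illustrative sketch, the provider obligations described above could be modelled as a simple compliance checklist. The class, field names and simplified logic below are our own and are not drawn from the AIGC Regulatory Framework itself:

```python
from dataclasses import dataclass

@dataclass
class GenAIService:
    """Simplified, illustrative view of an AIGC service provider's position."""
    has_public_opinion_attributes: bool  # or social mobilisation capabilities
    algorithm_filed: bool
    security_assessment_done: bool
    collects_personal_info: bool
    consent_obtained: bool

def outstanding_obligations(svc: GenAIService) -> list:
    """Return the compliance steps still outstanding for this provider."""
    gaps = []
    # Algorithm filing is triggered by public-opinion attributes or
    # social mobilisation capabilities.
    if svc.has_public_opinion_attributes and not svc.algorithm_filed:
        gaps.append("register/file the algorithm")
    # Security assessments are mandated for Gen AI applications.
    if not svc.security_assessment_done:
        gaps.append("complete a security assessment")
    # Consent must be obtained before collecting personal information.
    if svc.collects_personal_info and not svc.consent_obtained:
        gaps.append("obtain end-user consent")
    return gaps
```

In practice the analysis is far more granular (for example, ongoing transparency, monitoring and marketing opt-out obligations also apply), but the checklist shape mirrors how the framework ties each obligation to a specific trigger.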
For financial institutions, compliance with Gen AI regulations involves establishing a comprehensive risk management framework for AI algorithms and models, obtaining the necessary approvals, and completing required registrations and filings. The development and evaluation phases of AI algorithms and models must remain independent, with reassessments and optimisations conducted at least annually. Furthermore, Chinese Mainland banks are also prohibited from outsourcing their responsibilities related to Gen AI model management. The board of directors and senior management must thoroughly understand the roles and limitations of the Gen AI models they implement, ensuring robust governance to manage the complexities and risks associated with generative AI technologies.
We observe that there have been some recent preliminary applications of Gen AI in the securities, banking and payments fields in Chinese Mainland. For example, a major Chinese securities firm has utilised Gen AI for customer service and investment advice. Similarly, a state-owned bank has applied Gen AI across various business areas, including intelligent assistants for employees, knowledge operation assistants, and financial market research. Additionally, the Gen AI tools developed by a leading payment and merchant services company, which can function as an office assistant, provide programming assistance, facilitate intelligent testing and conduct big data analysis, won the Second Prize of the PBOC's Fintech Development Award for their in-depth research and application in the field of payment technology.
Building on the existing regulations and laws, the CAC has proposed a consultation draft regulation titled Measures for Labelling Artificial Intelligence Generated Synthetic Content[22], aimed at regulating the labelling of AIGC[23]. The draft sets out detailed labelling requirements designed to foster healthy AI development while protecting the rights of citizens, legal entities, and other organisations, as well as safeguarding public interests.
The security of content generated by Gen AI is a top priority for Chinese Mainland regulators, as it relates to national security, social stability, and the protection of individual legal rights and interests. The AIGC Interim Measures prioritise "the capability to mobilise public opinion" as a key regulatory focus, highlighting the importance of content security. In June 2024, the cyberspace administration office of Jiulongpo District in Chongqing imposed administrative penalties[24] on the operator of the "Kaishan Monkey" (开山猴) AI writing website under the Cybersecurity Law and the AIGC Interim Measures, on the ground that the operator had violated the prohibition on generating information banned by laws and regulations. In addition to content security, Chinese Mainland regulators are also concerned about data security and algorithm risks. With respect to data security, the focus includes ensuring that data sources are legitimate, validly authorised and do not infringe the legitimate interests of other parties (e.g. IP rights). For algorithm risks, the key concerns are the transparency of the algorithms and the controllability of algorithm-generated outputs. We anticipate that Chinese Mainland regulators will continue their efforts to enact Gen AI-related legislation and enforce regulations in these areas.
9. How should financial institutions approach Gen AI risk governance?
While regulatory frameworks for Gen AI are still being developed, it is advisable for financial institutions to take a proactive approach to manage the governance and risk management issues associated with Gen AI, rather than waiting for regulations to be fully developed.
A key strategy for financial institutions is to establish a Gen AI risk governance process that ensures the lawful, ethical, and safe development and deployment of AI systems. While the specific risk governance structure may differ according to each organisation’s circumstances and local regulatory requirements, we outline three core features that are essential for effective risk governance:
I. The board of directors (or equivalent ultimate decision-making body) of an organisation should establish clear AI principles that reflect the organisation’s values. These high-level principles will guide the organisation’s overall risk governance approach, and should be communicated to employees and potentially affected third parties.
II. AI principles must be integrated into the organisation through a comprehensive AI governance framework that includes an AI strategy for managing risks, appropriate governance processes—such as an AI Risk Committee—and clearly defined accountability roles with appropriate oversight from senior management and the board of directors (or equivalent body).
III. The AI governance framework should include tailored operational processes, such as inventory assessments, AI impact assessments, policies and procedures, information management systems, and training programs, to ensure effective implementation throughout the organisation.
Organisations should perform AI impact assessments at the beginning of a Gen AI project and at key stages throughout its development. This approach helps maximise the benefits of Gen AI while identifying potential risks, especially concerning the use of personal data. In high-risk scenarios involving the use of Gen AI, organisations should also maintain ongoing communication with relevant regulatory authorities. In summary, having robust risk governance processes that take into account legal, ethical, and regulatory issues is essential for the responsible use of Gen AI and for meeting regulatory expectations.
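The assessment cadence described above can be illustrated with a hypothetical sketch. The fields, scoring and oversight labels below are our own simplifications for illustration, not requirements drawn from any regulator's guidance:

```python
from dataclasses import dataclass

@dataclass
class AIImpactAssessment:
    """Illustrative impact-assessment record; fields and scoring are our own."""
    project: str
    uses_personal_data: bool
    customer_facing: bool
    affects_financial_outcomes: bool

    def risk_level(self) -> str:
        # Count the risk factors present; two or more is treated as high risk.
        score = sum([self.uses_personal_data, self.customer_facing,
                     self.affects_financial_outcomes])
        return {0: "low", 1: "medium"}.get(score, "high")

    def required_oversight(self) -> str:
        # Higher-risk deployments warrant closer human oversight and, in
        # high-risk scenarios, ongoing communication with regulators.
        return {"low": "periodic review",
                "medium": "human-in-the-loop review",
                "high": "human approval plus regulator engagement"}[self.risk_level()]
```

An organisation would re-run such an assessment at project inception and at each key development stage, so that the oversight level tracks how the use of personal data and customer exposure evolves.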
10. How can I find out more about Gen AI and related laws and regulations?
KWM’s fully bilingual financial regulatory and financial markets teams have extensive experience advising financial institutions, fund houses and fintech companies on a broad range of matters related to Gen AI, digital assets, emerging fintech and financial regulation.
We are familiar with the unique and nuanced commercial and legal issues faced by financial institutions, asset managers and fintech companies in the fast-evolving Gen AI regulatory landscape in Hong Kong, Chinese Mainland, United States, Australia, Japan and Europe. We can provide a range of support for your AI-related initiatives and projects, including helping you design and implement a comprehensive AI governance framework.
Come speak to us - we would be pleased to share our further insights with you.
*In this article, “Hong Kong” or “HK” means the “Hong Kong Special Administrative Region of the People’s Republic of China”, and “PRC” or “Chinese Mainland” means the People’s Republic of China, excluding Hong Kong, Macao Special Administrative Region and Taiwan.