On 12 November 2024, the Securities and Futures Commission of Hong Kong ("SFC") published a circular (the "Circular") on the use of generative artificial intelligence language models ("AI LMs") by SFC-licensed corporations ("LCs") that offer services or functionality provided by AI LMs, or by third-party products based on AI LMs, in relation to their regulated activities. The SFC acknowledges that AI LMs can be, or are being, used by LCs to respond to client enquiries, summarize information, generate research reports, identify investment signals, and generate computer code.
The SFC endorses the responsible use of AI LMs to promote innovation and enhance operational efficiency. At the same time, the Circular reminds LCs of the risks associated with the use of AI LMs, including hallucination, bias, cyberattacks, inadvertent leakage of confidential information, and breaches of personal data privacy and intellectual property laws. The SFC also sets out its expectations for LCs using AI LMs, including the implementation of effective policies, procedures, and internal controls. In particular, LCs should ensure senior management oversight and governance, proper model risk management, effective cybersecurity, and sound data risk management.
Further, the SFC considers the deployment of AI LMs to provide investment recommendations, advice, or research to investors or clients to be a high-risk use case. Consequently, LCs are required to implement additional risk mitigation measures for such high-risk use cases, including human-in-the-loop review to address hallucination risks and to ensure the factual accuracy of an AI LM's output before it is communicated to the user. The SFC reminds LCs intending to adopt AI LMs in high-risk use cases that they must comply with the notification requirements under the Securities and Futures (Licensing and Registration) (Information) Rules, and it encourages them to discuss their plans with the SFC.
The general approach adopted by the SFC in the Circular is that the obligation to provide correct information rests with the LCs, as part of their responsibilities and professional duties. Where incorrect information is produced, the fault lies not with the AI LMs but with the users, in this instance the LCs, who have failed to carry out the necessary verification. This approach is not new, and the effective use of AI LMs will depend on the quality of human oversight as well as the ability to identify the tasks for which AI LMs can and should be used.
The full version of the Circular can be found here.