IOSCO Publishes Guidance on Supervising AI/Machine Learning
Artificial intelligence (AI) and machine learning (ML) are increasingly used by organisations, driven by a combination of greater data availability and computing power. Although the use of this technology may bring benefits, such as speed and accuracy, it may also create or amplify certain risks.
The Board of the International Organization of Securities Commissions (IOSCO), an international body for securities market regulators, has published guidance to help its members regulate and supervise the use of AI and ML by market intermediaries and asset managers. The guidance follows a consultation report published in June and the continued growth of technology in the financial sector.
What does the guidance include?
The report notes that the rise in the use of electronic trading platforms and the increasing availability of data have led firms to progressively use AI and ML in their trading and advisory activities, and risk management and compliance functions. Consequently, regulators are focusing on the use and control of AI and ML in financial markets to mitigate the potential risks and prevent consumer harm.
IOSCO encouraged regulators to require market intermediaries and asset managers that use AI and ML to do the following:
- Ensure senior management oversees the development and controls of AI and ML, including a documented internal governance framework for accountability
- Repeatedly validate the results of their use of AI and ML to confirm (i) expected behavior in stressed and unstressed market conditions and (ii) compliance with regulatory obligations
- Have the expertise necessary to understand and challenge the algorithms produced, and to conduct due diligence
- Have a service level agreement that sets out the scope of any outsourced functions, with clear performance indicators and rights and remedies for poor performance
- Disclose meaningful information about their use of AI and ML (and regulators should determine the information they need from firms for appropriate oversight)
- Have controls in place to ensure that the data on which AI and ML depend does not introduce biases, and otherwise consider the ethical aspects of using the technology, such as privacy, accountability, explainability and auditability
IOSCO noted that members and firms should "consider the proportionality of any response" when seeking to implement such measures, adding that the regulatory framework may need to "evolve in tandem to address the associated emerging risks."