IOSCO Publishes Guidance on Supervising AI/Machine Learning

The Board of the International Organization of Securities Commissions (IOSCO) has published guidance to help its members regulate the use of AI and ML

Artificial intelligence (AI) and machine learning (ML) are increasingly used by organisations thanks to a combination of greater data availability and computing power. Although the use of this technology may bring benefits, such as speed and accuracy, it may also create or amplify certain risks.

The Board of the International Organization of Securities Commissions (IOSCO), the international body for securities market regulators, has published guidance to help its members regulate and supervise the use of AI and ML by market intermediaries and asset managers. The guidance follows a consultation report published in June and the continued growth of technology use across the financial sector.

What does the guidance include?

The report notes that the rise of electronic trading platforms and the increasing availability of data have led firms to make progressively greater use of AI and ML in their trading and advisory activities, as well as in their risk management and compliance functions. Consequently, regulators are focusing on the use and control of AI and ML in financial markets to mitigate the potential risks and prevent consumer harm.

IOSCO encouraged regulators to require market intermediaries and asset managers that use AI and ML to do the following:

  1. Ensure senior management oversees the development and controls of AI and ML, including through a documented internal governance framework with clear lines of accountability
  2. Continuously test and validate the results of their AI and ML use to confirm (i) expected behavior in stressed and unstressed market conditions and (ii) compliance with regulatory obligations
  3. Have the expertise necessary to understand and challenge the algorithms produced, and to conduct due diligence on any third-party providers
  4. Where AI and ML functions are outsourced, have a service level agreement that sets out the scope of the outsourced functions, with clear performance indicators and rights and remedies for poor performance
  5. Disclose meaningful information about their use of AI and ML (and regulators should determine what information they require from firms for appropriate oversight)
  6. Have controls in place to ensure that the data on which AI and ML depends is of sufficient quality to prevent biases, and otherwise consider the ethical aspects of the technology's use, such as privacy, accountability, explainability and auditability

IOSCO noted that members and firms should "consider the proportionality of any response" when seeking to implement such measures, adding that the regulatory framework may need to "evolve in tandem to address the associated emerging risks."

 
