IOSCO Publishes Guidance on Supervising AI/Machine Learning

The Board of the International Organization of Securities Commissions (IOSCO) has published guidance to help its members regulate the use of AI and ML

Artificial intelligence (AI) and machine learning (ML) are increasingly used by organisations thanks to a combination of greater data availability and computing power. Although the use of this technology may bring benefits, such as speed and accuracy, it may also create or amplify certain risks.

The Board of the International Organization of Securities Commissions (IOSCO), the international body for securities market regulators, has published guidance to help its members regulate and supervise the use of AI and ML by market intermediaries and asset managers. The guidance follows a consultation report published in June and reflects the technology's growing footprint in the financial sector.


What does the guidance include? 


The report notes that the rise of electronic trading platforms and the increasing availability of data have led firms to make progressively greater use of AI and ML in their trading and advisory activities, as well as in their risk management and compliance functions. Consequently, regulators are focusing on the use and control of AI and ML in financial markets to mitigate the potential risks and prevent consumer harm.

IOSCO encouraged regulators to require market intermediaries and asset managers that use AI and ML to do the following:

  1. Ensure senior management oversees the development and controls of AI and ML, including a documented internal governance framework for accountability
  2. Continuously validate the results of their use of AI and ML to confirm (i) expected behavior in stressed and unstressed market conditions and (ii) compliance with regulatory obligations
  3. Have the expertise necessary to understand and challenge the produced algorithms, and to conduct due diligence
  4. Have a service level agreement that sets the scope of the outsourced functions with clear performance indicators, and rights and remedies for poor performance
  5. Disclose meaningful information as to their AI and ML use (and regulators should determine the information they need from firms for appropriate oversight)
  6. Have controls in place to ensure that the data on which AI and ML depend is of sufficient quality to prevent biases, and otherwise consider ethical aspects of the use of the technology, such as privacy, accountability, explainability and auditability.

IOSCO noted that members and firms should "consider the proportionality of any response" when seeking to implement such measures, adding that the regulatory framework may need to "evolve in tandem to address the associated emerging risks."
