IOSCO Publishes Guidance on Supervising AI/Machine Learning

The Board of the International Organization of Securities Commissions (IOSCO) has published guidance to help its members regulate the use of AI and ML

Artificial intelligence (AI) and machine learning (ML) are increasingly used by organisations due to a combination of greater data availability and computing power. Although the use of this technology may bring benefits, such as speed and accuracy, it may also create or amplify certain risks.

The Board of the International Organization of Securities Commissions (IOSCO), the international body for securities market regulators, has published guidance to help its members regulate and supervise the use of AI and ML by market intermediaries and asset managers. The guidance follows a consultation report published in June and the continued growth of the technology in the financial sector.


What does the guidance include? 


The report notes that the rise in the use of electronic trading platforms and the increasing availability of data have led firms to make progressively greater use of AI and ML in their trading and advisory activities, as well as in their risk management and compliance functions. Consequently, regulators are focusing on the use and control of AI and ML in financial markets to mitigate potential risks and prevent consumer harm.

IOSCO encouraged regulators to require market intermediaries and asset managers that use AI and ML to do the following:

  1. Ensure senior management oversees the development and controls of AI and ML, including a documented internal governance framework for accountability
  2. Continually validate the results of their AI and ML use to confirm (i) expected behaviour in both stressed and unstressed market conditions and (ii) compliance with regulatory obligations
  3. Have the expertise necessary to understand and challenge the algorithms produced, and to conduct due diligence
  4. Have a service level agreement that sets the scope of any outsourced functions, with clear performance indicators and rights and remedies for poor performance
  5. Disclose meaningful information about their use of AI and ML (with regulators determining the information they need from firms for appropriate oversight)
  6. Have controls in place to ensure that the data on which AI and ML depend is of sufficient quality to prevent biases, and that ethical aspects of the technology's use, such as privacy, accountability, explainability and auditability, are considered.

IOSCO noted that members and firms should "consider the proportionality of any response" when seeking to implement such measures, adding that the regulatory framework may need to "evolve in tandem to address the associated emerging risks."

