The use of AI by regulators
In a new world defined by rapid technological advancement and ease of adoption, organisations are increasingly embracing AI. Numerous startups are using AI to disrupt incumbents, delivering competitive products and services to their customers. To navigate this new world, regulators must recognise that traditional governance principles will no longer suffice and, in a worst-case scenario, may disadvantage the very groups they are trying to safeguard. Regulators must harness the power of AI to modernise their oversight approach and promote responsible innovation.
This paper delves into strategies for positioning regulators effectively in this new world and beyond, fostering an ecosystem that embraces AI whilst providing the necessary guardrails to protect individuals.
How AI has started to impact regulatory affairs
We are already seeing regulatory bodies begin to embrace AI in their functions. The FCA, for example, has established a Data and Innovation office. Speaking at The Alan Turing Institute's Framework for Responsible Adoption of Artificial Intelligence in the Financial Services Industry (FAIR) event, its Chief Data, Information and Intelligence Officer, Jessica Rusu, said that innovation will lead to better AI regulation, and raised an important question: is clarification of the existing regulatory framework enough to manage AI in the UK financial sector, or is a new approach needed? It is also worth mentioning that the FCA is investing in AI; for instance, it provides a digital sandbox where AI propositions and proofs of concept can be tested.
How AI can be beneficial to the regulators
Providing regulatory foresight in addition to oversight: Regulators can leverage AI to proactively assess market conditions and anticipate significant market events such as the collapses of Lehman Brothers, Silicon Valley Bank, FTX and Evergrande. Regulators have the authority to request relevant data from companies and can use AI models to evaluate their financial health in comparison to their peers. For example, an AI model can be used for stress testing, simulating stress scenarios such as an economic downturn to gauge the resilience of financial institutions, as sketched below. This proactive approach enables regulatory agencies to provide advance warning to the markets of the likelihood of such events.
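As a hedged illustration of this kind of stress testing, the minimal sketch below applies simulated downturn shocks to reported capital ratios; the firm names, figures and thresholds are entirely hypothetical.

```python
import numpy as np

# Hypothetical stress test: apply simulated downturn shocks to reported
# capital ratios and estimate each firm's probability of breaching a
# regulatory minimum. All figures and thresholds are illustrative.
rng = np.random.default_rng(42)

firms = {"Firm A": 0.145, "Firm B": 0.112, "Firm C": 0.098}  # CET1 ratios
REGULATORY_MINIMUM = 0.08   # assumed minimum capital ratio
N_SCENARIOS = 10_000

# Downturn shock: losses erode the capital ratio by a random amount,
# modelled here as a normal shock (mean 2 points, sd 2 points).
shocks = rng.normal(loc=0.02, scale=0.02, size=N_SCENARIOS)

for name, ratio in firms.items():
    stressed = ratio - shocks
    breach_probability = np.mean(stressed < REGULATORY_MINIMUM)
    print(f"{name}: P(breach under stress) = {breach_probability:.1%}")
```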
Delivering holistic intelligence: Regulators have access to data from many businesses and their customers, further enriched by publicly available information and, potentially, data shared by regulatory partners. While much of this data is not accessible to any individual business, regulators can combine it to create comprehensive oversight. For example, data shared between financial regulators and security agencies can be analysed with technologies such as graph models and link analysis to provide a well-rounded understanding of terrorist financing.
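A minimal sketch of such link analysis, assuming Python with the networkx library; the accounts and amounts below are fictional.

```python
import networkx as nx

# Illustrative link analysis over combined regulatory and partner-agency
# data: transfers become edges, accounts become nodes, and simple graph
# measures surface accounts that aggregate funds from many sources.
transactions = [
    ("acct_1", "acct_2", 9_500),
    ("acct_2", "acct_3", 9_400),
    ("acct_4", "acct_3", 9_800),
    ("acct_3", "acct_5", 28_000),  # funds converging on one beneficiary
]

G = nx.DiGraph()
for sender, beneficiary, amount in transactions:
    G.add_edge(sender, beneficiary, amount=amount)

# Accounts receiving from many counterparties score highly here.
centrality = nx.in_degree_centrality(G)
suspects = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)
print("Highest in-degree accounts:", suspects[:2])
```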
Improving regulatory operating processes: AI can enhance the efficiency of regulatory processes, thereby improving the quality of their outcomes. Natural language processing (NLP) models can provide summarised, concise answers to specific inquiries, eliminating the need to painstakingly sift through volumes of documents. Financial institutions can also use Generative AI (GenAI) to gain insight into how regulation applies to their specific circumstances.
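As an illustration of the NLP point, a minimal summarisation sketch using the Hugging Face transformers library; the model choice is illustrative rather than an endorsement, and the filing text is a placeholder.

```python
from transformers import pipeline

# Sketch of document summarisation for regulatory inquiries using an
# off-the-shelf model. In practice the input would be a long filing.
summariser = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

filing_text = (
    "The firm reported a material increase in leverage during Q3, "
    "driven by expanded derivatives exposure. Liquidity buffers were "
    "drawn down to meet margin calls, and the board approved a revised "
    "risk appetite statement in response."
)  # placeholder for a lengthy regulatory document

summary = summariser(filing_text, max_length=40, min_length=10)
print(summary[0]["summary_text"])
```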
In essence, any costly, error-prone regulatory-affairs process currently carried out by humans is a candidate for AI-driven improvements in efficiency and quality.
Emerging threats: Financial crime regulators are facing new and unfamiliar threats as traded assets such as cryptocurrencies, NFTs and digital wallets gain popularity. Criminals are taking advantage of the anonymity, pseudonymity and global reach of such instruments. These new forms of payment have a different fingerprint to traditional ones, where risk attributes such as geographical location, currency, and sender and beneficiary information are no longer relevant. AI can be used to analyse transactions flowing through a blockchain network to detect suspicious movements of funds. Similarly, NLP can be used to analyse cryptocurrency social media, websites and forums to identify bad actors.
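One hedged example of such on-chain analysis is a simple "peel chain" heuristic: a long sequence of hops in which nearly the full balance is passed on at each step is a common layering pattern. The addresses and values below are fictional.

```python
# Each address maps to its onward transfer: (next address, amount sent).
# Addresses and amounts are fictional stand-ins for on-chain data.
transfers = {
    "addr_a": ("addr_b", 10.00),
    "addr_b": ("addr_c", 9.95),
    "addr_c": ("addr_d", 9.91),
    "addr_d": ("addr_e", 9.88),
}

def peel_chain_length(start, min_pass_through=0.98):
    """Count consecutive hops where ~all of the prior amount moves on."""
    hops, addr, prev_amount = 0, start, None
    while addr in transfers:
        nxt, amount = transfers[addr]
        if prev_amount is not None and amount < prev_amount * min_pass_through:
            break
        hops, addr, prev_amount = hops + 1, nxt, amount
    return hops

if peel_chain_length("addr_a") >= 3:
    print("addr_a: possible peel chain, escalate for review")
```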
Success factors for implementing AI in a regulatory function
Divide between regulation and investigation: In many cases, the regulatory entities responsible for legislating operate independently from the agencies tasked with investigating the cases that result from those laws. Furthermore, there are situations where insights from investigations do not directly inform the legislative process, which in turn fails to provide guidance to the financial institutions that are required by law to report cases. This can constrain the AI methodology available to financial institutions: with insights and feedback from investigations, they can take a supervised learning approach and develop predictive models for crimes such as money laundering and child trafficking; without that feedback, they are limited to unsupervised learning. Furthermore, AI models need this feedback loop as an integral part of their learning process, to reinforce or invalidate their recommendations. The sketch below illustrates the two approaches.
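A minimal sketch contrasting the two regimes, assuming scikit-learn; the features and "investigation outcomes" are entirely synthetic.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

# Investigation outcomes act as labels: with them, a supervised model
# can be trained; without them, institutions fall back on unsupervised
# anomaly detection. All data here is mocked for illustration.
rng = np.random.default_rng(1)
X = rng.normal(size=(1_000, 5))                               # transaction features
investigation_outcome = (X[:, 0] + X[:, 3] > 2).astype(int)   # mock feedback labels

# With a feedback loop: supervised learning on confirmed outcomes.
clf = RandomForestClassifier(random_state=1).fit(X, investigation_outcome)

# Without feedback: unsupervised detection, with no notion of "confirmed".
detector = IsolationForest(random_state=1).fit(X)

print("supervised training accuracy:", clf.score(X, investigation_outcome))
print("unsupervised flags:", (detector.predict(X) == -1).sum())
```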
Black box models: Several financial institutions use black box models, predominantly neural networks. These models can learn complex patterns in data, which makes them compelling for financial crime detection. However, the lack of transparency about how a neural network arrives at its conclusions presents challenges when it is used in financial crime investigations, where transparency, fairness and the absence of bias are essential. This creates a paradoxical situation: regulators discourage the adoption of this type of model, leading financial institutions to opt for less predictive models, resulting in a greater quantity of low-quality cases and, in the worst case, real criminal activity going undetected.
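One pragmatic middle ground is to pair higher-capacity models with model-agnostic explanation techniques. A hedged sketch using permutation importance (SHAP values are a common alternative); the feature names and labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train an opaque model, then report which inputs drive its decisions
# in terms an investigator or regulator could review.
rng = np.random.default_rng(7)
feature_names = ["txn_velocity", "avg_amount", "new_beneficiaries",
                 "cross_border_ratio"]
X = rng.normal(size=(2_000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 1).astype(int)   # mock case outcomes

model = RandomForestClassifier(random_state=7).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=7)

for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {importance:.3f}")
```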
Complexity of data from multiple sources: To fully leverage AI, regulators may need data sharing arrangements, publicly available data and data pertaining to the institutions they regulate. Complexities such as data quality and the lack of a common data dictionary need to be addressed by regulators to realise the promise of AI.
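A common data dictionary can be enforced mechanically at the point of ingestion. A minimal sketch, assuming pandas; the field names and rules are hypothetical.

```python
import pandas as pd

# Check each institution's submission against an agreed schema before it
# enters the regulator's analytics estate. Schema is illustrative only.
SCHEMA = {
    "firm_id": "object",
    "reporting_date": "datetime64[ns]",
    "total_assets_gbp": "float64",
}

def validate_submission(df: pd.DataFrame) -> list[str]:
    issues = []
    for column, expected in SCHEMA.items():
        if column not in df.columns:
            issues.append(f"missing column: {column}")
        elif str(df[column].dtype) != expected:
            issues.append(f"{column}: expected {expected}, got {df[column].dtype}")
    if "total_assets_gbp" in df.columns and (df["total_assets_gbp"] < 0).any():
        issues.append("total_assets_gbp contains negative values")
    return issues

submission = pd.DataFrame({
    "firm_id": ["FRN-0001"],
    "reporting_date": pd.to_datetime(["2024-03-31"]),
    "total_assets_gbp": [1.2e9],
})
print(validate_submission(submission) or "submission conforms to dictionary")
```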
Conclusion
In summary, despite the progress regulators have made in innovating through AI adoption, substantial gaps remain to be addressed. Regulatory bodies cannot afford to lag behind the industries they oversee and, crucially, they must stay ahead of criminals whose methods evolve and adapt day by day.
To achieve this, a strong commitment to executing these strategies, backed by significant investment and a genuine aspiration to become an AI-led organisation, is imperative.