How Advancements in AI are Upskilling Fraud Detection

Although long-time users of ML, fraud and ID verification experts are looking to advancements in AI to iron out issues and fight future challenges

As anyone can likely tell you, AI is on a cross-sector rampage right now, with industries from manufacturing to food taking steps to implement it into their workflows and operations.

One industry, however, has been among the earliest adopters of AI. Fraud and ID verification, operating mostly within the financial sector, has been using machine learning (ML) for decades in its pursuit to keep money moving where it should.

Yet being an early adopter and long-time user does not mean the field will not evolve as AI does. Fraudsters are adapting.

“For many years, organisations relied on rules-based technology to spot fraudulent activity,” explains Christen Kirchner, Senior Solutions Expert, Fraud & AML at SAS.

“However, as fraudsters adapt their methods, for these rules to be effective, they would need to be continuously updated and tuned.”

Fraud & ID detection dynamics 

AI uses several techniques to spot irregularities or other signs that may indicate fraudulent activity or fake identities.

The ML involved builds on rule-based fraud detection methods and models. These analyse crucial transaction details to identify potentially fraudulent activity, draw on historical data about past fraud cases, and scrutinise various elements, including the purchase amount, device ID and e-mail address associated with the transaction.

They also consider whether a VPN is being used to mask the user's actual IP address, as well as the type of browser employed and whether it is operating in incognito mode. Additionally, these systems take into account recent failed login attempts on the account in question.

These predictive and adaptive analytics techniques are applied to combine big data sources with real-time monitoring and risk profile analysis to flag suspicious transactions that deviate from established patterns of legitimate user behaviour.
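
As a rough illustration of the checks described above, the minimal sketch below scores a transaction against a handful of simple rules (purchase amount, VPN use, incognito browsing, recent failed logins) and flags it once a threshold is crossed. The field names, weights and thresholds are assumptions made for illustration, not any vendor's actual rule set.

```python
# Minimal rule-based risk scoring sketch (illustrative only).
# Field names, weights and thresholds are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    device_id: str
    email: str
    uses_vpn: bool
    incognito: bool
    failed_logins_last_hour: int

def risk_score(tx: Transaction, typical_amount: float) -> float:
    """Combine simple rules into a single risk score between 0 and 1."""
    score = 0.0
    if tx.amount > 3 * typical_amount:      # unusually large purchase
        score += 0.4
    if tx.uses_vpn:                         # IP address masked by a VPN
        score += 0.2
    if tx.incognito:                        # browser in private mode
        score += 0.1
    if tx.failed_logins_last_hour >= 3:     # repeated failed login attempts
        score += 0.3
    return min(score, 1.0)

def flag_suspicious(tx: Transaction, typical_amount: float, threshold: float = 0.6) -> bool:
    return risk_score(tx, typical_amount) >= threshold

if __name__ == "__main__":
    tx = Transaction(amount=2400.0, device_id="dev-42", email="user@example.com",
                     uses_vpn=True, incognito=False, failed_logins_last_hour=4)
    print(flag_suspicious(tx, typical_amount=150.0))  # True: deviates from normal behaviour
```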

Yet this model, despite having kept our financial system afloat for many years, has its challenges.

“One of the main challenges is managing and analysing large sets of unstructured data,” says Ariel Shoham, VP of Risk Product, Mangopay.

Damage by a thousand frauds 

“Although the quantity of data matters in detecting fraud and discovering new patterns, this data must be sorted like wheat from the chaff to increase fraud prevention efficiency. Irrelevant data can lead to inaccurate predictions and may increase the number of false positives.”

The accuracy of these fraud-preventing AI predictions depends heavily on the data being sorted and labelled correctly. Yet, this is a laborious and data-intensive task.

For large enterprises, which can invest in the infrastructure behind their systems and procedures, this may not be as big an issue. But in the age of fintech, where small challenger startups now provide some of the financial services that bigger providers once did, this process of refining and selecting features can be a significant challenge for vendors.
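
One common way to separate the wheat from the chaff is to measure how much signal each candidate feature carries about the fraud label and drop the rest. The sketch below, using synthetic data and an off-the-shelf scikit-learn selector, is only illustrative of that idea; real pipelines refine features far more carefully.

```python
# Keep only features that carry signal about the fraud label.
# Data and column meanings are synthetic/illustrative.

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
n = 1000

# Hypothetical feature columns: amount, failed logins, and pure noise.
amount = rng.exponential(100, n)
failed_logins = rng.poisson(0.5, n)
noise = rng.normal(0, 1, n)                       # irrelevant feature
X = np.column_stack([amount, failed_logins, noise])

# Synthetic label loosely tied to the first two features only.
y = ((amount > 300) | (failed_logins > 2)).astype(int)

selector = SelectKBest(mutual_info_classif, k=2).fit(X, y)
print("Kept feature indices:", selector.get_support(indices=True))  # likely [0, 1]
```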

Equally, a lack of transparency in some of the models used limits the insights that can be extracted from the decisions these AI systems make.

“Many fraud prevention solution providers deploy systems that deliver results without clear explanations,” Ariel explains. “This makes it difficult for their clients to understand the logic behind decisions. This ‘black box’ approach, where the process isn't clear, makes it tough to use AI's findings to help guide big-picture choices.”
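
One partial mitigation is to publish, alongside the decisions, which inputs drive the model. The minimal sketch below, assuming a tree-based model and synthetic data, prints global feature importances; dedicated explainability tools such as SHAP go considerably further than this.

```python
# Attaching a simple explanation to a model's behaviour.
# Model, data and feature names are hypothetical; dedicated explainability
# libraries (e.g. SHAP) provide more faithful attributions than this sketch.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
feature_names = ["amount", "failed_logins", "uses_vpn"]

X = np.column_stack([
    rng.exponential(100, 500),
    rng.poisson(0.5, 500),
    rng.integers(0, 2, 500),
])
y = ((X[:, 0] > 300) | (X[:, 1] > 2)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global view: which inputs drive the model's decisions overall?
for name, weight in zip(feature_names, model.feature_importances_):
    print(f"{name}: {weight:.2f}")
```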

These issues, while limiting, do not stop such systems from serving the wider goal of fraud detection and prevention. Yet as attackers press on with new ways to exploit systems, defenders need to harness innovations in AI to tackle them.

Attackers wielding AI

A 2024 report by fraud detection company Signicat and consultancy Consult Hyperion showed that deepfakes now represent 6.5% of total fraud attempts, marking a 2137% increase over the past three years.

With many ID checks now carried out in apps via video and voice passwords, this poses a significant challenge when verifying those trying to access their accounts.

“One overarching issue in fraud detection is that new scams are constantly arising, with professional fraudsters always seeking to find new ways to exploit consumers. Some are even using generative AI chatbots to craft more convincing emails,” says Christen.

Yet, just as the threat lies in the increasing use of AI, so does the remedy. Advancements in Generative AI (Gen AI) can help improve how models detect these acts of fraud.

“Generative AI can help businesses get a better and more detailed understanding of how customers normally behave through granular behavioural analysis,” Ariel elaborates. “By creating in-depth profiles of users, businesses can notice unusual activities more easily, which is great for detecting account takeover and identity theft cases.”
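
As a toy version of the behavioural profiling Ariel describes, the sketch below builds a per-user baseline from past sessions and flags any session that deviates sharply from it. The features and threshold are illustrative assumptions; production systems profile far richer behaviour.

```python
# Toy behavioural profiling: flag sessions that deviate from a user's own baseline.
# Features and the z-score threshold are illustrative assumptions.

import numpy as np

def build_profile(past_sessions: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Per-feature mean and standard deviation from a user's historical sessions."""
    return past_sessions.mean(axis=0), past_sessions.std(axis=0) + 1e-9

def is_anomalous(session: np.ndarray, mean: np.ndarray, std: np.ndarray,
                 threshold: float = 3.0) -> bool:
    """Flag the session if any feature sits more than `threshold` std devs from baseline."""
    z = np.abs((session - mean) / std)
    return bool((z > threshold).any())

# Columns: login hour, session length (minutes), transfer amount.
history = np.array([[9, 12, 50], [10, 15, 60], [9, 10, 45], [11, 14, 55]], dtype=float)
mean, std = build_profile(history)

print(is_anomalous(np.array([10, 13, 52.0]), mean, std))   # False: looks like the owner
print(is_anomalous(np.array([3, 2, 900.0]), mean, std))    # True: possible account takeover
```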

Cloud AI is also advancing at a staggering rate, helping LLMs process the information they have more effectively and produce better results.

“AI-enabled cloud data analytics is paving the way forward. LLMs for one can help spot fraud in text by leveraging their natural language understanding and processing capabilities, informed by the latest data,” Christen adds. 
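
The sketch below is not an LLM, but a deliberately simple stand-in that shows the underlying idea of classifying text as suspicious or benign; an LLM's broader language understanding, informed by the latest data, is what Christen is pointing to. The training phrases and labels are invented for illustration.

```python
# A deliberately simple stand-in for LLM-based text screening:
# classify messages as suspicious or benign. Training phrases are made up.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Your account is locked, verify your password here immediately",
    "Urgent: confirm your card details to avoid suspension",
    "Hi, attaching the minutes from Tuesday's meeting",
    "Lunch at 1pm? The usual place works for me",
]
labels = [1, 1, 0, 0]  # 1 = suspicious, 0 = benign (illustrative labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
print(clf.predict(["Please verify your password urgently to unlock your account"]))
```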

This, combined with AI creating and using synthetic data, can give models a wider scope to detect known patterns of fraud where real-life data is insufficient, save time in processing unstructured data and, as a result, reduce the rate of false positives while improving the detection rate.
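
Where genuine fraud examples are scarce, generative models can synthesise plausible ones. The sketch below uses a much cruder stand-in, jittering the few known fraud records, purely to show how augmenting the minority class changes the training mix; all values are synthetic.

```python
# Crude stand-in for synthetic-data generation: augment scarce fraud examples
# by jittering known ones. Real generative approaches produce far richer
# synthetic records; this only illustrates the class-balancing idea.

import numpy as np

rng = np.random.default_rng(2)

legit = rng.normal(loc=[100, 0], scale=[30, 0.5], size=(980, 2))   # many legitimate rows
fraud = rng.normal(loc=[900, 4], scale=[50, 1.0], size=(20, 2))    # very few fraud rows

def augment(minority: np.ndarray, target_count: int, noise_scale: float = 0.05) -> np.ndarray:
    """Resample minority rows with small multiplicative noise until target_count is reached."""
    idx = rng.integers(0, len(minority), size=target_count - len(minority))
    jitter = 1 + rng.normal(0, noise_scale, size=(len(idx), minority.shape[1]))
    return np.vstack([minority, minority[idx] * jitter])

fraud_augmented = augment(fraud, target_count=200)
print(legit.shape, fraud.shape, fraud_augmented.shape)  # (980, 2) (20, 2) (200, 2)
```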

Moving forward against fraud

In the rush to implement AI, Christen stresses that the human component of the fraud prevention package shouldn't be pushed aside. 

“Human oversight will always be an important part of the process, particularly when evaluating anomalies, and it is through multiple methods used together that identity fraud can be spotted and combatted,” she explains. 

But when looking to upskill AI, Ariel believes a more tailored approach will yield better results in an age where threats are harder to detect.

Two prongs he advocates for this approach are intelligence from customer data and fraud intelligence. The first involves tailoring each AI model to the customer's specific data.

“The system's ability to detect fraud will improve over time as it learns from more customer data, which plays an important role in training the initial ML model,” he explains. 

“For further precision improvement, the models should undergo periodic retraining to ensure they adapt as fraud patterns change.”
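
A minimal sketch of that retraining cadence, assuming a simple scikit-learn model and a hypothetical function that fetches newly labelled customer cases, might look like this.

```python
# Sketch of periodic retraining as new labelled customer data arrives.
# Model choice, schedule and data source are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fetch_labelled_batch(rng: np.random.Generator, size: int = 500):
    """Stand-in for pulling the latest confirmed-fraud / confirmed-legitimate cases."""
    X = rng.normal(size=(size, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, size) > 0.8).astype(int)
    return X, y

rng = np.random.default_rng(3)
history_X, history_y = fetch_labelled_batch(rng)

model = LogisticRegression().fit(history_X, history_y)   # initial model

for month in range(1, 4):                                # e.g. a monthly retraining cadence
    new_X, new_y = fetch_labelled_batch(rng)
    history_X = np.vstack([history_X, new_X])            # accumulate the customer's own data
    history_y = np.concatenate([history_y, new_y])
    model = LogisticRegression().fit(history_X, history_y)
    print(f"Month {month}: retrained on {len(history_y)} labelled cases")
```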

Secondly, by thinking like a fraudster, you can give the model what it needs to know about threats in order to properly recognise them. 

Putting a cybersecurity cap on, Ariel explains: “One can use the information related to the latest tactics and tools shared within dark web circles to train ML models to stay one step ahead in detecting and preventing fraud.

“For example, once learning that fraudsters use certain RATs to steal users’ identities during supposedly legitimate banking sessions, or shady VPNs to hide their IP address, you can train the model to recognise those specific tools and patterns in your traffic and thus reject only fraudsters,” Ariel concludes.
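
Illustratively, turning that kind of threat intelligence into model inputs can be as simple as deriving features from known-bad indicators; the indicator lists in the sketch below are hypothetical placeholders rather than real intelligence data.

```python
# Turning threat intelligence into model features: flag sessions that match
# known-bad indicators. The lists below are hypothetical placeholders,
# not real threat-intel data.

KNOWN_RAT_USER_AGENTS = {"remote-tool/1.2", "hiddenvnc-client"}     # hypothetical signatures
SHADY_VPN_IP_PREFIXES = ("203.0.113.", "198.51.100.")               # documentation IP ranges

def intel_features(session: dict) -> dict:
    """Derive binary features a fraud model can be trained on."""
    return {
        "matches_known_rat": session.get("user_agent", "") in KNOWN_RAT_USER_AGENTS,
        "from_shady_vpn": session.get("ip", "").startswith(SHADY_VPN_IP_PREFIXES),
    }

session = {"user_agent": "hiddenvnc-client", "ip": "203.0.113.7"}
print(intel_features(session))  # {'matches_known_rat': True, 'from_shady_vpn': True}
```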

While ML has long been a cornerstone in this field, the rapid evolution of AI technologies presents both opportunities for it to improve and challenges it must overcome. The integration of Gen AI, cloud-based analytics and improved LLMs offers more sophisticated tools to combat complex fraud attempts, yet the rise of deepfakes and AI-assisted scams underscores the need for constant adaptation.

Success will therefore depend on balancing cutting-edge AI with human oversight, adopting tailored approaches that combine customer-specific data with fraud intelligence, and continuously refining AI models to stay ahead of emerging threats.
