How cloud-powered AI could revolutionise digital fraud

By Ananth Gundabattula, Co-Founder, Darwinium
Ananth Gundabattula of Darwinium makes the case for next-gen fraud platforms to prevent AI-powered fraud, which cost the UK an estimated £4bn in 2022

The UK has a fraud problem, and it is increasingly being fuelled by AI. According to one estimate, Brits lost as much as £4 billion to scammers in 2022. Things are set to get worse still as cyber-criminals tap the power of machine learning to outwit the technology organisations use to spot suspicious behaviour.

A great leap forward

A rapid acceleration in the pace of technology innovation over the past few years has benefitted our society and economy immeasurably. Much of this is built on cloud computing, which provides reasonably priced, on-demand compute power, enabling organisations to innovate at scale while streamlining their operations and enhancing business agility. But while the cloud has lowered the barrier to entry for legitimate users, it has done the same for cyber-criminals. Nefarious individuals use cloud infrastructure every day to scale their operations anonymously.

The next wave of innovation in fraud will come from cloud-powered AI – or more correctly, machine learning (ML). Leveraging the power of the cloud, new malign ML models offer the prospect of automating tasks that only humans could perform a few years ago. That’s bad news for us all. 

Outwitting the machines

The problem comes when ML models are applied to circumvent the defences companies build to spot obvious fraud. Consider a typical fraud mitigation system in a retail setting. There may be a rule whereby transactions over £900 in certain geolocations are automatically flagged for secondary verification. An ML tool could be programmed to work out, through trial and error, the point at which high-value transactions are inspected. The adversary then need only keep their fraudulent payments under £900, or route them through an unmonitored geolocation, to avoid detection. What was once a time-consuming process becomes a simple matter of cloud-powered analytics.
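
To see how cheap this probing is, consider a minimal sketch. The rules engine below, with its hidden £900 threshold and `is_flagged` response, is a hypothetical stand-in for a merchant's real system; an attacker only ever sees the flagged/not-flagged outcome of each test transaction, yet a simple binary search recovers the limit in a handful of probes:

```python
# Hypothetical stand-in for a merchant's rules engine. An attacker
# never sees HIDDEN_THRESHOLD, only the outcome of each probe.
HIDDEN_THRESHOLD = 900.0

def is_flagged(amount: float, geo: str) -> bool:
    # Flag high-value transactions in a monitored geolocation.
    return geo == "GB" and amount > HIDDEN_THRESHOLD

def probe_threshold(geo: str, low: float = 0.0, high: float = 10_000.0,
                    tolerance: float = 1.0) -> float:
    """Binary-search the amount at which transactions start being flagged."""
    while high - low > tolerance:
        mid = (low + high) / 2
        if is_flagged(mid, geo):
            high = mid   # flagged: the threshold is at or below mid
        else:
            low = mid    # not flagged: the threshold is above mid
    return low

print(f"Estimated flagging threshold: about £{probe_threshold('GB'):.0f}")
# Roughly 14 probe transactions suffice for a £0-£10,000 range at £1 resolution.
```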

Even sophisticated ML models can be probed and attacked for weaknesses by malicious AI. Models are increasingly ‘black box’, and they must be trained on data from previous attacks; together, that is a perfect recipe for production decisioning that is vulnerable to exploitation when presented with a slightly different scenario. It only takes some targeted trial and error for malicious AI to learn those oversights and blind spots.
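
To make the blind-spot point concrete, here is a toy sketch of black-box probing. The attacker cannot see the model, only its accept/decline decision, and perturbs a declined transaction's features at random until a nearby variant slips through; the scoring function and feature values are invented for illustration, not drawn from any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deployed fraud model: the attacker never sees
# these weights, only the accept/decline outcome of each probe.
_weights = np.array([0.8, 0.5, 0.3])

def declined(features: np.ndarray) -> bool:
    return float(_weights @ features) > 1.0

def evade(features: np.ndarray, budget: int = 500) -> np.ndarray:
    """Random-perturbation search using only accept/decline feedback."""
    for attempt in range(budget):
        scale = 0.05 * (1 + attempt / 50)   # widen the search over time
        candidate = features + rng.normal(scale=scale, size=features.shape)
        if not declined(candidate):
            return candidate  # found an input the model waves through
    return features  # evasion failed within budget

fraudulent = np.array([1.2, 0.9, 0.7])
print(declined(fraudulent))         # True: flagged as-is
print(declined(evade(fraudulent)))  # False: a nearby variant passes
```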

That’s not all. AI could also generate fake but convincing images of a user’s face, allowing a transaction to proceed because the checking computer assumes it to be a photo of a new user. Or it could be trained on video and audio data in the public domain (e.g. clips posted to social media) to impersonate legitimate customers in authentication checks. Similarly, AI could be trained to mimic human behaviour, such as mouse movements, to outwit machines designed to spot signs of non-human activity in various transactions. It could even generate different combinations of stolen data to bypass validation checks – a compute-intensive task that can be solved using the public cloud.

What happens next?

Fraudsters often have the advantage. They have the element of surprise and the financial motivation to succeed. Yet fraud and risk teams can counter malicious AI by tweaking their own approaches. AI can be trained by the bad guys to mimic human behaviour more realistically. But if it’s used in automated attacks, it will still need to be deployed like a bot, which can be detected by the right machines.
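
One illustrative defensive signal, sketched below with invented thresholds and numbers: however realistic each individual action looks, scripted agents tend to fire events on a near-fixed cadence, and the regularity itself can give them away.

```python
import statistics

def looks_automated(event_intervals_ms: list[float]) -> bool:
    """Flag sessions whose inter-event timing is suspiciously regular.
    Humans click and type with high variance; replayed or scripted
    sessions tend towards a metronomic cadence. The 0.1 cut-off is
    illustrative, not a tuned production value."""
    if len(event_intervals_ms) < 5:
        return False  # too little evidence either way
    mean = statistics.mean(event_intervals_ms)
    if mean == 0:
        return True
    coefficient_of_variation = statistics.stdev(event_intervals_ms) / mean
    return coefficient_of_variation < 0.1

print(looks_automated([210, 340, 95, 510, 180, 420]))  # False: ragged, human-like
print(looks_automated([100, 102, 99, 101, 100, 98]))   # True: metronomic replay
```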

Businesses could use continuous journey tracking to thwart malicious AI. Because this approach captures intelligence across the entire session/user journey, there’s more opportunity to spot machine-generated anomalies. Flexible signal generation can also be a powerful tool in a security engineer’s arsenal. It could be used in the examples above to trigger image analysis as soon as an image is uploaded. Or to compare mouse movements across non-financial transaction pages with those where a financial transaction is being initiated.
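
As a minimal sketch of that second idea, assuming hypothetical mouse traces captured as (x, y, timestamp_ms) tuples across the journey: compare how the cursor behaves on ordinary browsing pages with how it behaves at the moment a payment is initiated, and escalate when the two diverge.

```python
import math
import statistics

def speed_profile(trace):
    """Per-segment cursor speeds for a trace of (x, y, t_ms) tuples."""
    return [
        math.hypot(x1 - x0, y1 - y0) / max(t1 - t0, 1)
        for (x0, y0, t0), (x1, y1, t1) in zip(trace, trace[1:])
    ]

def behaviour_shift(browsing_trace, payment_trace, tolerance=0.5):
    """True when cursor behaviour at payment time diverges sharply from
    the same session's browsing behaviour. The 50% tolerance is an
    illustrative placeholder, not a tuned production threshold."""
    browsing = statistics.median(speed_profile(browsing_trace))
    payment = statistics.median(speed_profile(payment_trace))
    return abs(browsing - payment) / max(browsing, 1e-9) > tolerance
```

A session that browsed with ragged, human-like movement but completes payment with machine-straight, uniform strokes would trip a check like this, even if each individual page interaction looked plausible in isolation.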

The bottom line: we are just at the start of a new arms race in cybersecurity and fraud mitigation. Settle in for a bumpy ride.

About Darwinium:

Darwinium is a next-generation fraud platform, and the world’s first customer protection platform that helps businesses understand trust and risk across full digital journeys rather than point-in-time interactions. In practice, this means Darwinium simplifies risk decisions by aggregating vast amounts of data, enabling businesses to make more accurate decisions without first having to make sense of complex data or vast rulesets.
