AI in Financial Fraud: Deepfake attacks soar by over 2000%

The report reveals a concerning shift in fraudsters' tactics
A report by Signicat and Consult Hyperion shows that deepfakes now represent 6.5% of total fraud attempts, a 2137% increase over the past three years

A new report by digital identity company Signicat and consultancy Consult Hyperion has revealed an alarming trend: over a third of fraud attempts targeting financial institutions now use AI.

The research highlights the rapidly evolving threat landscape, with fraud prevention decision-makers agreeing that AI will drive almost all identity fraud in the future, leading to more victims than ever before.

Alarmingly, around three-quarters of organisations cite a lack of expertise, time, and budget as hindering their ability to detect and combat AI-driven fraud.

This shortfall in expertise and resources is particularly alarming when set against one statistic from the report: deepfake fraud attempts have increased by 2137% over the past three years.

Taking over accounts

The report also charts a concerning shift in the tactics employed by fraudsters. Three years ago, AI was primarily used to create new or synthetic identities and to forge documents.

Today, AI is being employed more extensively and at scale for deepfakes and social engineering attacks. 

Account takeovers, once considered primarily a consumer issue, have become the most common fraud type for business-to-business organisations.

Fraudsters exploit weak or reused passwords to compromise existing accounts, often using deepfakes to impersonate the account holder.

Data on deepfakes

Deepfakes, which use AI to generate realistic but fabricated audio and video content, now represent a staggering 6.5% of total fraud attempts, marking a 2137% increase over the past three years.
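As a quick sanity check on these figures (the arithmetic below is illustrative, not from the report), a 2137% increase means today's share is 22.37 times the level of three years ago, implying deepfakes made up roughly 0.29% of fraud attempts back then:

```python
# Back-of-the-envelope check of the report's headline figures.
# A 2137% increase means the current share is (1 + 21.37)x the baseline.
current_share = 6.5    # % of total fraud attempts attributed to deepfakes today
increase_pct = 2137    # reported increase over the past three years

growth_factor = 1 + increase_pct / 100        # 22.37x
baseline_share = current_share / growth_factor

print(f"Implied share three years ago: {baseline_share:.2f}%")  # ~0.29%
```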

The World Economic Forum last year reported that the banking sector is particularly concerned by deepfake attacks, with 92% of cyber practitioners worried about their fraudulent misuse.

The high cost of deepfake fraud is also felt across other industries. In 2023, 26% of small companies and 38% of large companies experienced deepfake fraud, with losses of up to US$480,000.

Regulators are also taking note. Last year, the UK's Financial Conduct Authority (FCA) sounded the alarm over the risks associated with deepfake fraud.

FCA Chief Executive Nikhil Rathi stated that AI could disrupt the financial services sector in "ways and at a scale not seen before."

And such scale has already been seen. In May 2024, it emerged that engineering giant Arup had fallen victim to a deepfake fraud costing £20 million (US$25,486,000) after an employee was tricked into joining a video conference featuring a digitally recreated version of the company's CFO.

This incident highlights the sophistication and potential impact of deepfake attacks on even the largest organisations.

Detecting deepfakes


While the threat of deepfake fraud is growing, there are signs that can help identify these deceptive tactics.

Security company Kaspersky previously outlined several indicators of a deepfake video, such as unnatural blinking patterns, inconsistent lip movements, and background irregularities.

However, as AI technology continues to advance, so too do the capabilities of deepfakes, making them increasingly difficult to detect.

Fortunately, the same AI technology that creates deepfakes can also be leveraged to fight them.

AI systems can be trained to spot the subtle anomalies and inconsistencies that may indicate a deepfake, providing a powerful defence against this emerging threat.

McAfee and Intel, for instance, have collaborated on a tool called Deepfake Detector for just that reason. Deepfake Detector uses techniques such as audio analysis to spot the subtle differences between real videos and AI-generated ones.

But in addition to leveraging AI for detection, organisations must implement robust procedures to prevent social engineering at the individual level, which, according to cybersecurity firm Avast, is often the weakest link in the security chain.



AI Magazine is a BizClik brand
