‘The dark side’: How scammers are utilising the power of AI

European law enforcement agency Europol has produced a new report outlining how criminals and bad actors might abuse large language models like ChatGPT

Artificial intelligence and machine learning clearly have the capacity to revolutionise the way individuals and organisations across the globe go about their daily business. 

It’s a notion that has become ever more entrenched in mainstream thinking over the past few months thanks to large language models (LLMs) like OpenAI’s ChatGPT, to which millions of people have now been exposed. 

But with the power and capability of LLMs comes danger, as European law enforcement agency Europol has been keen to point out. 

In response to the growing public attention being given to ChatGPT, the Europol Innovation Lab organised several workshops led by experts to explore how criminals might abuse LLMs to overcome hurdles which have long hindered them. 

The result was a ‘Tech Watch Flash’ report providing an overview of the potential misuse of ChatGPT.


Europol report outlines dangers of LLMs

In its report, Europol highlights the dark side of generative AI like ChatGPT, which is providing an opportunity for criminals and bad actors to “exploit LLMs for their own nefarious purposes.”

The policing organisation also warned of a “grim outlook” given the inevitable improvements of such tools over the coming years. 

Three areas of crime were given as the main concerns identified by Europol experts:

  • Fraud and social engineering: ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes. The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.
  • Disinformation: ChatGPT excels at producing authentic-sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.
  • Cybercrime: In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code. 

Europol added that it would become increasingly important for law enforcement agencies to stay up to speed with the progression of technology, allowing them to anticipate and prevent abuse of LLMs. 

Read the full report: ChatGPT – The impact of Large Language Models on Law Enforcement
