‘The dark side’: How scammers are utilising the power of AI

European law enforcement agency Europol has produced a new report outlining how criminals and bad actors might abuse large language models like ChatGPT

Artificial intelligence and machine learning clearly have the capacity to revolutionise the way individuals and organisations across the globe go about their daily business. 

It’s a notion that has become ever more entrenched in mainstream thinking over the past few months thanks to large language models (LLMs) like OpenAI’s ChatGPT, to which millions of people have now been exposed. 

But with the power and capability of LLMs comes danger, as European law enforcement agency Europol has been keen to point out. 

In response to the growing public attention being given to ChatGPT, the Europol Innovation Lab organised several workshops led by experts to explore how criminals might abuse LLMs to overcome hurdles which have long hindered them. 

The result was a ‘Tech Watch Flash’ report providing an overview of the potential misuse of ChatGPT.

Europol is a European law enforcement agency. Picture: Europol

Europol report outlines dangers of LLMs

In its report, Europol highlights the dark side of generative AI like ChatGPT, which is providing an opportunity for criminals and bad actors to “exploit LLMs for their own nefarious purposes.”

The policing organisation also warned of a “grim outlook” given the inevitable improvements of such tools over the coming years. 

Europol’s experts identified three areas of crime as the main concerns:

  • Fraud and social engineering: ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes. The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.
  • Disinformation: ChatGPT excels at producing authentic-sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.
  • Cybercrime: In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code. 

Europol added that it would become increasingly important for law enforcement agencies to stay up to speed with the progression of technology, allowing them to anticipate and prevent abuse of LLMs. 

Read the full report: ChatGPT – The impact of Large Language Models on Law Enforcement

