‘The dark side’: How scammers are utilising the power of AI

European law enforcement agency Europol has produced a new report outlining how criminals and bad actors might abuse large language models like ChatGPT

Artificial intelligence and machine learning clearly have the capacity to revolutionise the way individuals and organisations across the globe go about their daily business. 

It’s a notion that has become ever more entrenched in mainstream thinking over the past few months thanks to large language models (LLMs) like OpenAI’s ChatGPT, to which millions of people have now been exposed. 

But with the power and capability of LLMs comes danger, as European law enforcement agency Europol has been keen to point out. 

In response to the growing public attention being given to ChatGPT, the Europol Innovation Lab organised several workshops led by experts to explore how criminals might abuse LLMs to overcome hurdles which have long hindered them. 

The result was a ‘Tech Watch Flash’ report providing an overview of the potential misuse of ChatGPT.


Europol report outlines dangers of LLMs

In its report, Europol highlights the dark side of generative AI like ChatGPT, which is providing an opportunity for criminals and bad actors to “exploit LLMs for their own nefarious purposes.”

The policing organisation also warned of a “grim outlook” given the inevitable improvements of such tools over the coming years. 

Europol’s experts identified three areas of crime as the main concerns:

  • Fraud and social engineering: ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes. The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.
  • Disinformation: ChatGPT excels at producing authentic sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.
  • Cybercrime: In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code. 

Europol added that it would become increasingly important for law enforcement agencies to stay up to speed with the progression of technology, allowing them to anticipate and prevent abuse of LLMs. 

Read the full report: ChatGPT – The impact of Large Language Models on Law Enforcement
