‘The dark side’: How scammers are utilising the power of AI

European law enforcement agency Europol has produced a new report outlining how criminals and bad actors might abuse large language models like ChatGPT

Artificial intelligence and machine learning clearly have the capacity to revolutionise the way individuals and organisations across the globe go about their daily business. 

It’s a notion that has become ever more entrenched in mainstream thinking over the past few months thanks to large language models (LLMs) like OpenAI’s ChatGPT, to which millions of people have now been exposed. 

But with the power and capability of LLMs comes danger, as European law enforcement agency Europol has been keen to point out. 

In response to the growing public attention being given to ChatGPT, the Europol Innovation Lab organised several workshops led by experts to explore how criminals might abuse LLMs to overcome hurdles which have long hindered them. 

The result was a ‘Tech Watch Flash’ report providing an overview of the potential misuse of ChatGPT.


Europol report outlines dangers of LLMs

In its report, Europol highlights the dark side of generative AI like ChatGPT, which is providing an opportunity for criminals and bad actors to “exploit LLMs for their own nefarious purposes.”

The policing organisation also warned of a “grim outlook” given the inevitable improvements of such tools over the coming years. 

Europol’s experts identified three areas of crime as the main concerns:

  • Fraud and social engineering: ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes. The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.
  • Disinformation: ChatGPT excels at producing authentic-sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.
  • Cybercrime: In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code. 

Europol added that it would become increasingly important for law enforcement agencies to stay up to speed with the progression of technology, allowing them to anticipate and prevent abuse of LLMs. 

Read the full report: ChatGPT – The impact of Large Language Models on Law Enforcement
