Artificial intelligence and machine learning clearly have the capacity to revolutionise the way individuals and organisations across the globe go about their daily business.
It’s a notion that has become ever more entrenched in mainstream thinking over the past few months thanks to large language models (LLMs) like OpenAI’s ChatGPT, to which millions of people have now been exposed.
But with the power and capability of LLMs comes danger, as European law enforcement agency Europol has been keen to point out.
In response to the growing public attention being given to ChatGPT, the Europol Innovation Lab organised several expert-led workshops to explore how criminals might abuse LLMs to overcome hurdles that have long hindered them.
The result was a ‘Tech Watch Flash’ report providing an overview of the potential misuse of ChatGPT.
Europol report outlines dangers of LLMs
In its report, Europol highlights the dark side of generative AI tools like ChatGPT, which provide an opportunity for criminals and bad actors to “exploit LLMs for their own nefarious purposes.”
The policing organisation also warned of a “grim outlook” given the inevitable improvements of such tools over the coming years.
Three areas of crime were given as the main concerns identified by Europol experts:
- Fraud and social engineering: ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing. The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.
- Disinformation: ChatGPT excels at producing authentic-sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.
- Cybercrime: In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. For a would-be criminal with little technical knowledge, this is an invaluable resource for producing malicious code.
Europol added that it will become increasingly important for law enforcement agencies to stay up to speed with technological progress so that they can anticipate and prevent abuse of LLMs.
Read the full report: ChatGPT – The impact of Large Language Models on Law Enforcement