ChatGPT and similar language models are exploding in popularity, and Europol is now warning of an increase in phishing, disinformation and other cybercrime.
The European Union Agency for Law Enforcement Cooperation, Europol, has released a new report focusing on large language models (LLMs) such as ChatGPT. These services have become increasingly popular not only among law-abiding users but also among cybercriminals.
“As the capabilities of LLMs such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provide a grim outlook,” Europol writes.
The report identifies three areas in particular where these large language models can be used to cause harm. First, they can be used for phishing, where ChatGPT’s ability to generate believable-sounding text makes fraudulent messages more convincing to potential victims. Second, language models can be used to create and spread disinformation at scale. Third, they can be used to write malicious code, even if the user has no programming knowledge.
Europol’s report paints a dark picture of a future shaped by language models. Although tools like ChatGPT are still relatively basic, future language models are expected to have access to more data and to solve more sophisticated problems, increasing the potential for abuse by malicious actors.