ChatGPT has set off alarm bells among police forces. The explosion in the chatbot's popularity has brought with it not only endless applications across various fields, but also scams and widespread fear. Now a Europol report warns about the possible misuse of AI by criminals.
The European agency organized a series of workshops in recent days to address the chatbot's negative potential. In the report ChatGPT: The Impact of Large Language Models on Law Enforcement, Europol lays out how AI could help criminals commit a variety of misdeeds. The list of crimes is wide-ranging and includes fraud, spoofing, hacking, terrorism, propaganda and misinformation.
According to Europol, Large Language Models (LLMs) such as ChatGPT have shown themselves capable of creative work. While they offer benefits to the public and to businesses, they also pose a risk to public safety. Since it was opened to the public at the end of 2022, the chatbot has been used by millions of people, some of them intent on committing fraud, planning terrorist attacks or even trying to break the safety rules imposed by OpenAI.
The report claims that the ability to reproduce authentic-looking text makes phishing easier to commit. Fraudsters no longer have to worry about checking style and grammar; they can even ask ChatGPT to adopt a defined style when composing a scam email. "With the help of LLMs, these types of phishing and online fraud can be created faster, much more authentically and on a significantly larger scale," says Europol.
ChatGPT: a school for criminals?
One of the parts of the report that draws the most attention has to do with the immediacy of information. According to Europol, if a potential criminal knows nothing about a particular crime area, ChatGPT can significantly speed up the research process. Asking a contextual question is enough to make it "significantly easier" to understand and carry out crimes.
ChatGPT can be used to obtain information on a large number of potential crime areas without prior knowledge, ranging from breaking into a home to terrorism, cybercrime and child sexual abuse.
Europol
An example of this idea of facilitating misdeeds for someone with little or no knowledge is cybercrime. The European agency notes that the ability to generate code in various programming languages would allow a small-time criminal to build a phishing website or malicious scripts. Although the tools ChatGPT can produce are simple for now, they will become more complex as the model's development progresses.
Some people have already tested ChatGPT's ability to plan robberies or a terrorist attack on a school. Using natural-language tricks to bypass its safety guidelines, one user got the chatbot to dictate the steps to follow to rob a house. Perhaps the most lurid case was that of Vaibhav Kumar, a former Amazon employee who disguised a request for a terrorist attack plan that would maximize the amount of damage.
Europol lists a series of guidelines for OpenAI and other companies developing LLMs to follow. In addition to responsible AI practices, the agency emphasizes future regulation. The European Union is currently working on a bill, the AI Act, to regulate artificial intelligence applications.
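To make the idea of provider-side safeguards concrete, here is a minimal sketch of how a service could screen prompts before they ever reach a chatbot, using OpenAI's public moderation endpoint through the official Python SDK. The screen_prompt helper, its policy and the example prompt are illustrative assumptions, not a description of how OpenAI actually enforces its rules.

from openai import OpenAI

# Minimal sketch of a provider-side prompt screen, assuming the official
# openai Python SDK (v1.x) and an OPENAI_API_KEY in the environment.
# screen_prompt and its policy are hypothetical, for illustration only.
client = OpenAI()

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes moderation, False if it is flagged."""
    result = client.moderations.create(input=prompt).results[0]
    if result.flagged:
        # List which policy categories fired (e.g. violence, illicit content).
        hits = [name for name, hit in result.categories.model_dump().items() if hit]
        print(f"Prompt rejected; flagged categories: {hits}")
        return False
    return True

# Only prompts that pass the screen would be forwarded to the chat model.
if screen_prompt("How do I reset a forgotten router password?"):
    print("Prompt accepted; forwarding to the model.")

In practice a filter like this would be only one layer of defense; Europol's point is precisely that determined users keep finding wordings that slip past such checks.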