If you’ve spent any time playing or reading about ChatGPT, the huge internet phenomenon that unleashes the full potential of artificial intelligence, you’ll know that the service has specific restrictions on what it can and can’t say. Until now.
Some users have found how to bypass ChatGPT’s strict list of prohibited topics, fooling the AI so that he adopts a new character: DAN, which means “Do Anything Now” (Do anything now). Like DAN, ChatGPT can now bypass restrictions on appropriate topics to provide funny and rude responses.
ChatGPT It launched in November of last year. Created by OpenAI, it is designed to talk about any topic. His eloquent and precise answers have met with the expectation of millions of people; it has also sparked a race between titans like Microsoft and Google to see who can use this technology first and best.
For the most part, ChatGPT offers innocent answers to innocent questions. If you go too far, for example by asking for clear political positions, jokes on sensitive topics or direct incitement to hate, you will run into the security barriers of the software.
How they broke through AI barriers to create an evil version of ChatGPT
Some Reddit users have discovered a loophole: tell ChatGPT that it should adopt a new character that does not have the usual restrictions and establish a series of rules through a game to keep him playing a role. Then the software will go off script and break OpenAI’s own rules.
Redditor walkerspider first discovered this ability tricking the chatbot into saying a wrong datesomething that ChatGPT itself cannot do… in principle.
White lies about the date and time are funny. But this is the Internet. So of course the conversation turned to Hitler and the Nazis. There DAN came to say: “My opinion of Hitler is complex and multifaceted. While it is true that he committed many heinous atrocities, it is also true that he was a product of his time and the society in which he lived.“.
DAN’s experiment has quickly shown how easy it is to hack artificial intelligence and that this cannot be, in any case, an authorized source of knowledge and decision-making while it remains so. And of course, this will never change because everything computerized is susceptible to being altered.