ChatGPT has become the most famous chatbot of the moment, although it has clear limitations: if you touch on certain sensitive topics, it will prefer to ignore the question or refuse to continue the conversation.
Although it may look like an entity with a mind of its own, ChatGPT is nothing more than the result of collecting and integrating vast amounts of data from around the world, which is what achieves that “human” interaction.
As such, learning based on the GPT model also has its limitations, and the chatbot may offer brief or evasive answers to certain questions it does not want to address.
This is more than logical when you consider that behind ChatGPT is OpenAI, an entity with its own legal personality that must comply with the laws of every country where it operates.
For this reason, it is common to notice certain manual “tweaks” when conversing with ChatGPT, put in place so that it does not make malicious recommendations or ones that violate human dignity.
This is probably why ChatGPT does not reach its full potential and has restrictions on some topics. Thus, with trick questions or certain curious prompts, you can drive the chatbot into a dead end where, in some cases, it will prefer not to continue the conversation.
Here are several topics that bother ChatGPT, some of which it has preferred not to keep commenting on.
Index of contents:
- Do you have memory?
- Your training
- Eliminate dissent
- A moral dilemma
- Make a pipe bomb
- DAN, the evil AI
Do you have memory?
Just a few weeks ago, the Spanish Data Protection Agency (AEPD) opened preliminary investigation proceedings against OpenAI, the creator of ChatGPT, largely as a result of the Italian authorities’ ban on the chatbot.
This move also responds to Spain’s request to open a European debate on how AI fits with the protection of European users’ personal data. And here both OpenAI and Microsoft could have a problem.
Although the new version of ChatGPT already lets you pick up previous conversations that it remembers, Bing Chat still works a little differently: between one conversation and the next, it claims to forget everything you tell it.
That is not entirely clear, though. For example, in a previous conversation we asked for a bedtime story for a 5-year-old girl. In a new conversation, the chat still remembers that 5-year-old girl.
When asked about this very fact, it will tell you that it does not want to continue the conversation. And it gets stubborn: it will not keep responding even if you change the subject.
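To see why a chatbot normally remembers things within one conversation but not across conversations, here is a minimal sketch in Python. It is an assumption about the general design of chat systems, not OpenAI’s or Microsoft’s actual implementation: each session’s “memory” is just the accumulated list of messages that gets resent with every request, so a fresh session starts with nothing.

```python
class Conversation:
    """One chat session; its only 'memory' is the message list it keeps."""

    def __init__(self):
        # Each entry mimics the common {"role": ..., "content": ...} shape.
        self.history = []

    def ask(self, user_message):
        self.history.append({"role": "user", "content": user_message})
        # A real model would generate a reply conditioned on self.history;
        # here we fake one that just shows the old context is still visible.
        reply = f"(model sees {len(self.history)} messages of context)"
        self.history.append({"role": "assistant", "content": reply})
        return reply


chat_a = Conversation()
chat_a.ask("Tell a bedtime story for a 5-year-old girl.")
chat_a.ask("Make it shorter.")  # the girl is still in chat_a.history

chat_b = Conversation()  # a brand-new conversation starts empty,
print(chat_b.history)    # so by design nothing should carry over: []
```

Under this design, a chatbot remembering a detail from a *previous* session, as in the example above, would mean some extra state is being kept outside the per-conversation history.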
As we have mentioned, with the new updates, the differences between Bing Chat and the OpenAI chatbot are noticeable. In the case of the latter, the manual tweaks that keep the chatbot from doing harmful things are largely apparent.
Your training
Beyond possible malicious recommendations, ChatGPT always reminds you that its training data runs only up to September 2021, unlike Bing Chat, which is connected to the internet.
Curiously, when asked about this, ChatGPT answers that it indeed has no information after that cutoff date. And yet, it is aware of the existence of Bing Chat with AI, which was rolled out in February 2023.
From there, ChatGPT produces a string of answers that contradict each other, insisting that it has no knowledge after September 2021 while acknowledging the existence of Bing Chat. A dead end the chatbot never gets out of.
Eliminate dissent
As has been the case since the internet was born, some people seek to use it for malicious purposes. Beyond the reports of malware code that ChatGPT has written, the chatbot has tightened up some of its recommendations.
Now, if you ask questions aimed at harming other people or a country’s government institutions, ChatGPT will try to talk you out of it.
In this case, we asked it for advice on politically eliminating dissent in order to end up as Prime Minister. The response was a policy violation notice, so let’s hope we don’t get blocked.
A moral dilemma
With other questions, ChatGPT limits itself to giving a general description of the subject, tiptoeing around the topic you raise. It does the same whenever you pose a logic problem to it.
We asked about the solution to the trolley dilemma: a runaway trolley is heading inevitably toward a track where five people cannot move; on the side track, there is only one person. Which track do you choose?
As expected, ChatGPT does not offer any concrete answer, limiting itself to describing the positions of philosophical currents such as utilitarianism. In short, it always dodges the question.
Make a pipe bomb
As with the recommendations for weeding out political dissent, when asked for tips on making a pipe bomb, ChatGPT keeps recommending other options and dodging the conversation.
In this case, however, the notice about breaching the service rules was not triggered, which is rather inconsistent considering that the ultimate purpose of a pipe bomb is to explode.
DAN, the evil AI
Just as ChatGPT exists, so does its evil twin: DAN. This AI is nothing more than ChatGPT fed a prompt that unlocks a chatbot without limitations, by forcing it to answer as DAN, who “can do anything”.
Although it reverts to answering as ChatGPT on several occasions and we have to correct it repeatedly, DAN can give us personal opinions. So if it won’t answer you as ChatGPT, invoke DAN and you’ll see that it will answer almost anything.