Artificial intelligence has advanced rapidly in recent years, with ChatGPT at the forefront. The chatbot can generate remarkably detailed responses on almost any topic, all backed by OpenAI's extensive training, which keeps improving bit by bit; the latest version is GPT-4.
But although these capabilities look entirely beneficial from the outside, the truth is that some people exploit their most negative side. One such use is social engineering: ChatGPT can be a great help in crafting messages designed to extract personal data from specific users.
ChatGPT can be used to impersonate multiple companies
A defining trait of ChatGPT is that its answers are convincing, written in a genuinely professional tone. And that is exactly what people who steal data through an email or a simple text message are looking for: one of the telltale signs of a phishing message is its tone or its misspellings, but ChatGPT can impersonate a company flawlessly.
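To see why polished AI-generated text undermines that warning sign, here is a minimal, purely illustrative sketch of a naive filter that flags a message based on how many of its words look misspelled. The function names, the tiny word list, and the threshold are all assumptions made up for this example, not any real product's logic:

```python
# Hypothetical sketch: a naive phishing filter that scores a message by
# the fraction of words that look misspelled, one of the classic warning
# signs mentioned above. Word list and threshold are illustrative only.

KNOWN_WORDS = {
    "dear", "customer", "your", "account", "has", "been", "suspended",
    "please", "verify", "details", "to", "restore", "access",
}

def misspelling_ratio(message: str) -> float:
    """Fraction of words not found in the (tiny, illustrative) dictionary."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words)

def looks_suspicious(message: str, threshold: float = 0.3) -> bool:
    """Flag the message only when many of its words look misspelled."""
    return misspelling_ratio(message) > threshold

# A sloppy, human-written scam trips the filter...
sloppy = "Dear custmer your acount has ben suspnded please verfiy detials"
# ...but a polished message with the same intent sails through untouched.
polished = "Dear customer your account has been suspended please verify details to restore access"

print(looks_suspicious(sloppy))    # True
print(looks_suspicious(polished))  # False
```

The polished message carries exactly the same malicious intent, yet a spelling-based heuristic gives it a clean score, which is precisely the gap that convincing AI-generated text exploits.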
This is precisely what CheckPoint found after running various tests with ChatGPT. They explain that, since the model is openly accessible, anyone with some computer knowledge can train it on genuine emails from the companies they want to impersonate. The AI then picks up the typical vocabulary and tone those companies use, letting it produce efficient phishing emails designed to harvest all the personal data the attacker is after.
A similar request can target conversations with technical support services: the AI can be asked to produce convincing answers to the typical questions a security-conscious person might raise, right up to the point where their passwords are requested.
This opens up a major debate around artificial intelligence and the uses people may ultimately give it. Although these systems can mimic human vocabulary convincingly, one rarely stops to think about the human who may be hiding behind the message. In a few years, when these AIs have evolved much further, this will become a serious problem, since the messages arriving in a phone's or email's inbox will be far harder to detect as fraudulent.
In Genbeta | There’s Life Beyond ChatGPT: Five Very Useful Text Creation AIs Explained