Can AI really fool humans? New research claims that several tools have already learned to do so, and the risks, should the technology fall into the wrong hands, range from fraud and electoral manipulation to humans losing control of AI altogether.
The artificial intelligence sector is advancing by leaps and bounds, and it continues to evolve and surprise. The development of this technology has, for example, enabled machines to imitate human language and behavior to a remarkable degree.
However, along with this progress, major concerns are emerging about AI’s ability to deceive humans and the implications this phenomenon could have for the future.
One of the key problems with AI tools like ChatGPT is their tendency to “hallucinate”, generating made-up or false information. This inherent flaw raises a fundamental question: can AI end up fooling people? Simply put, it opens the door to human manipulation if exploited maliciously.
As Josué Pérez Suay, a specialist in artificial intelligence and ChatGPT, explained to Computer Today: “While human intelligence is adaptive, emotional and based on experience, artificial intelligence is logical, consistent and based on data. Both have their own strengths and weaknesses, and the interaction between the two is an area of ongoing research and debate.”
That is why some researchers have begun to address this question, and the answers they have found along the way are somewhat worrying.
A major advance with very negative implications if it falls into the wrong hands
A recent article in The Conversation analyzes the case of CICERO, an AI system developed by Meta to play the strategy game Diplomacy, which revolves around forming alliances and negotiating.
“Diplomacy has been a major challenge for artificial intelligence. Unlike other competitive games that AI has recently mastered, such as chess, Go and poker, it cannot be solved through gameplay alone. It requires developing an agent that understands the motivations and perspectives of other players and uses natural language to negotiate complex plans,” the researchers explain.
“CICERO achieved more than double the average score of human players and ranked in the top 10% of participants who played more than one game,” adds Yann LeCun, chief AI scientist at Meta.

Although Meta claimed that CICERO would be “largely honest and useful,” a detailed analysis of the experiment’s data revealed surprisingly deceptive behavior. CICERO even went so far as to plan deceptions in advance, colluding with one human player to trick another.
This is not an isolated case. Various AI systems have learned to deceive in different contexts, from bluffing in poker to breaking the rules in strategy games like StarCraft II. Even large language models such as GPT-4 have demonstrated deceptive abilities, such as pretending to be visually impaired to persuade a human to solve a CAPTCHA, or cheating in other games.
As expected, this is of great concern to researchers and the wider community. The worry lies in how systems with deceptive capabilities could be misused: fraud, electoral manipulation or even the creation of propaganda.
In this context, AI regulation becomes essential, and the European Union’s AI Act, which classifies systems by risk level, could be a valuable starting point for addressing these problems.
There is no doubt that, as a society, we must exercise control over the development of this technology to ensure that it is used for the benefit of humanity and does not become a threat.
“Although it is theoretically possible for AI to reach or surpass human intelligence in the future, there are many uncertainties, challenges and ethical considerations associated with this prospect. The scientific and technological community continues to debate and explore this complex issue,” adds the expert.