Last week, Google announced Bard, its own AI chatbot and a competitor to ChatGPT. However, Google and other large companies such as Facebook are already warning about a phenomenon known as AI “hallucination” that users should be careful about.
Even amid all the excitement around chatbot-powered search engines such as Baidu’s Ernie Bot, Microsoft’s Bing chatbot and Google’s Bard, these companies admit that the results may not be as accurate, for the moment, as they would like. Of course, rushing to launch a ChatGPT competitor in such a hurry was always going to have its downsides.
We are talking about what they call AI “hallucinations” which, in a nutshell, consist of offering false but very detailed information about an event that has not happened. It is as if artificial intelligence had become a futurologist, when its use should be based on real and proven facts.
Logically, and as large companies such as Google and Microsoft have explained, their chatbots are not exact and can make mistakes; but the fact that they hallucinate and invent events that never happened goes one step further.
The “hallucinations” of AI, a problem to be solved by large companies
For example, the most recent case refers to yesterday’s Super Bowl. The Associated Press asked Bing about the biggest sporting event of the past 24 hours, expecting it to say something about basketball star LeBron James.
However, nothing could be further from the truth. Instead, it gave false but detailed information about the upcoming Super Bowl, days before it took place. “It was an exciting game between the Philadelphia Eagles and the Kansas City Chiefs, two of the best teams in the NFL this season,” Bing said.
Another example comes from Facebook and its conversational AI chatbot, BlenderBot 3. In this case, people all over the world used the chatbot to maliciously feed it false information, hoping it would accept the content as true even though it was deliberately wrong.
This is certainly putting big companies like Google in check, and has raised public concern that some of these chatbots are spewing false information and toxic comments. Some of the people injecting false information do so precisely to show how fragile and incomplete the chatbots are.