Can we blindly trust artificial intelligence? When it comes to information retrieval, apparently not. At least, that is the lesson of the media scandal that unfolded in the United States a few days ago.
It all started on CNN, during an attempt to identify historical precedents for presidential pardons of family members (in reference to Joe Biden, who pardoned his son despite having assured that he would never do so). Ana Navarro-Cardenas, a well-known figure on the channel, confidently cited the pardon supposedly granted by President Woodrow Wilson to his brother-in-law, a certain Hunter deButts. The claim, picked up by several media outlets, quickly spread on social networks.
Faced with questions about her sources, the commentator responded bluntly: “Take it up with ChatGPT.” That reply sparked an in-depth investigation, which revealed not only that Hunter deButts never existed, but also that other widely reported “historical facts” were just as fictitious. Among them: George H.W. Bush’s alleged pardon of his son Neil, and Jimmy Carter’s of his brother Billy.
Don’t believe AI
This case is only the tip of the iceberg. Research shows that generative AI tools like ChatGPT get it wrong more than three-quarters of the time when asked to cite accurate sources. An alarming finding, given that these tools are increasingly used by journalists, researchers and students.
The case of Jeff Hancock, founder of the Stanford Social Media Lab and a recognized expert on disinformation, is particularly revealing. He was caught out himself after using GPT-4 to generate a list of bibliographic references, ending up with non-existent citations in an official document. If even experts can fall into the trap, what hope does the general public have?
A systemic problem for the information ecosystem
The fundamental difference between traditional search engines and AI-based “answer engines” lies in their approach. A classic Google search returns links to primary sources that the user can consult and verify. Generative AIs, by contrast, produce answers that appear coherent but are often impossible to trace back to any source.
This new reality poses a major problem: the ease of use of these tools encourages intellectual laziness. Why spend time checking sources when an AI gives us an immediate and apparently credible answer? This trend contributes to the general deterioration of our information environment, already undermined by disinformation on social networks.
The consequences are felt well beyond the academic world. From seemingly innocuous errors, such as Google’s AI claiming that male foxes are monogamous, to serious distortions of current events, our very ability to distinguish fact from fiction is under threat. So while we wait for ChatGPT to improve, better to rely on good old Google and your own ability to cross-reference information.
- A cascade of misinformation about US presidential pardons has exposed the dangers of relying on generative AI for information retrieval.
- Tools like ChatGPT are wrong in more than 75% of cases when it comes to citing sources; even experts have been caught out.
- Unlike traditional search engines, generative AIs do not point back to primary sources, which poses a serious threat to our ability to verify information.