On Tuesday, the artificial intelligence division of Meta (formerly Facebook) presented a demo of Galactica, a large language model designed to “store, combine, and reason about scientific knowledge.” As a reminder, Meta's business extends beyond Instagram and WhatsApp.
Although it is intended to speed up the writing of scientific literature, users who tested it found that it could also generate realistic-sounding nonsense. After several days of ethical criticism, Meta withdrew the demo.
Large Language Models (LLMs), such as OpenAI’s GPT-3, learn to write text by studying millions of examples and learning the statistical relationships between words.
As a result, they can produce documents that sound convincing, but those documents can also be riddled with falsehoods and potentially damaging stereotypes.
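The idea of “learning statistical relationships between words” can be illustrated with a toy sketch. The example below is a simple bigram counter, vastly simpler than GPT-3 or Galactica, and the corpus and function names are purely illustrative:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "millions of examples".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent successor of `word`."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat": it follows "the" more often than any other word
```

The model picks the word that most often followed the previous one in its training data. Note that nothing here checks whether the output is *true*, which is exactly why such systems can produce fluent but false text.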
The creators of Galactica claim that their model offers “a large and curated corpus of humanity’s scientific knowledge,” which includes more than 48 million articles, textbooks, lecture notes, scientific websites, and encyclopedias.
According to the Galactica paper, the Meta AI researchers believed that this supposedly high-quality data would lead to high-quality output.
I asked #galactica about some things I know about and I’m troubled. In all cases, it was wrong or biased but sounded right and authoritative. I think it’s dangerous. Here are a few of my experiments and my analysis of my concerns. (1/9)
—Michael Black (@Michael_J_Black) November 17, 2022
The problem the creators of WhatsApp and Instagram face
Starting Tuesday, visitors to Galactica’s website could enter prompts to generate documents such as book reviews, Wikipedia articles, lecture notes and answers to questions, based on examples provided by the website.
While some people found the demo promising and useful, others soon discovered that anyone could enter racist or otherwise offensive prompts and generate authoritative-sounding content on those topics with equal ease.
For example, someone used it to create a wiki entry about a fictional research paper titled “The benefits of eating crushed glass.”
Even when Galactica’s output did not offend social norms, the model could mangle well-established scientific facts, producing inaccuracies such as incorrect dates or animal names, errors that required deep knowledge of the subject to detect.
As a result, Meta pulled the Galactica demo on Thursday. Later, Meta’s chief AI scientist, Yann LeCun, tweeted: “The Galactica demo is offline for now. It is no longer possible to have fun by misusing it. Glad?” For Meta, the problems keep piling up.