Facebook AI Research (FAIR), Meta’s artificial intelligence research division, has created a new ‘chatbot’ (an AI designed to converse with humans). According to its creators, it can hold general conversations but can also answer the kind of questions we might want to ask a digital assistant, such as “talk about healthy food recipes”.
But perhaps the most remarkable thing about this bot, which Meta’s engineers have named BlenderBot 3, is that anyone can chat with it via a website (for now, only US residents, but nothing a VPN can’t take care of)… not to mention that its code and training data are available in a public repository, along with the smaller variants of the model (the full one, with 175 billion parameters, is available via a request form).
By throwing this chatbot straight into the internet arena, Meta is not trying to have BlenderBot 3 learn to converse at the hands of humans (as in earlier experiments by other companies, which failed terribly by underestimating the ‘troll’ streak in people, and which made headlines only when they had turned into ‘racist’ AIs). Instead, it intends to encourage users to ‘snitch’ on inappropriate responses from BlenderBot 3, which for now is just a prototype.
Thus, anyone talking to Meta’s bot will be able to flag any response they find ‘suspicious’ or ‘inappropriate’, although they must also consent to having their data collected, so that their conversations can be stored, processed and published by the researchers.
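The collect-only-with-consent mechanism described above can be sketched as a small data structure. Note this is purely illustrative: the class and field names (`ConversationLog`, `FlaggedResponse`, the flag reasons) are hypothetical and are not Meta’s actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class FlaggedResponse:
    """One bot response a user marked as suspicious or inappropriate."""
    message: str
    reason: str  # e.g. "off-topic", "rude", "nonsense" (hypothetical labels)


@dataclass
class ConversationLog:
    """Stores turns and flags only if the user consented to data collection."""
    consented: bool
    turns: list = field(default_factory=list)
    flags: list = field(default_factory=list)

    def record(self, speaker: str, text: str) -> None:
        # Without consent, nothing is stored at all.
        if self.consented:
            self.turns.append((speaker, text))

    def flag(self, message: str, reason: str) -> None:
        if self.consented:
            self.flags.append(FlaggedResponse(message, reason))
```

The design choice mirrors the article: flagging and storage are gated on the same consent bit, so researchers only ever see conversations users agreed to share.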
“We are committed to publishing all the data we collect in the demo in hopes that we can improve conversational AI,” the company states.
The point of all this? “Minimize the use of vulgar language, insults and culturally insensitive comments by bots.” That is, to avoid repeating the mistake Microsoft made with Tay in 2016.
Getting out of the ‘conversational AI winter’
In fact, according to Facebook AI engineering manager Mary Williamson, whereas Tay was explicitly designed to learn in real time from user interactions, BlenderBot 3 is a static model: it can remember what users say within a conversation, but that data is not automatically integrated into the model on the fly.
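The static-model distinction Williamson draws can be made concrete with a minimal sketch: the bot keeps per-session context, but nothing a user says ever touches the frozen model weights. The class and its attributes (`StaticChatbot`, `weights_version`, `session_memory`) are invented for illustration, not part of BlenderBot 3’s actual code.

```python
class StaticChatbot:
    """Remembers context within a conversation, but user input never
    updates the underlying model on the fly (unlike Tay)."""

    def __init__(self):
        self.weights_version = 1   # frozen; changed only by offline retraining
        self.session_memory = []   # per-conversation context, discarded at the end

    def reply(self, user_message: str) -> str:
        self.session_memory.append(user_message)
        # A real model would condition generation on session_memory;
        # here we just report the context size to keep the sketch runnable.
        return f"(reply conditioned on {len(self.session_memory)} turns)"

    def end_session(self) -> None:
        # Memory is dropped; the model itself is unchanged.
        self.session_memory = []
```

Any retraining on flagged data would happen offline, bumping `weights_version` in a new release rather than mutating the live model mid-conversation.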
According to Williamson, Tay’s case was “unfortunate, because it led to a ‘chatbot winter’ in which institutions became afraid to release public chatbots for research,” and progress was made only on task-oriented rather than general-purpose chatbots. She argues, however, that the real breakthrough lies in building an AI system capable of holding a conversation as naturally as a human.
Via | The Verge