Meta’s AI research labs have created a new state-of-the-art chatbot and are letting members of the public talk to the system in order to collect feedback on its capabilities.
The bot is called BlenderBot 3 and can be accessed on the web. (Though, right now, it seems only residents in the US can do so.) BlenderBot 3 is able to engage in general chitchat, says Meta, but also answer the sort of queries you might ask a digital assistant, “from talking about healthy food recipes to finding child-friendly amenities in the city.”
The bot is a prototype and built on Meta’s previous work with what are known as large language models, or LLMs — powerful but flawed text-generation software of which OpenAI’s GPT-3 is the most widely known example. Like all LLMs, BlenderBot is initially trained on vast datasets of text, which it mines for statistical patterns in order to generate language. Such systems have proved to be extremely flexible and have been put to a range of uses, from generating code for programmers to helping authors write their next bestseller. However, these models also have serious flaws: they regurgitate biases in their training data and often invent answers to users’ questions (a big problem if they’re going to be useful as digital assistants).
This latter issue is something Meta specifically wants to test with BlenderBot. A big feature of the chatbot is that it’s capable of searching the internet in order to talk about specific topics. Even more importantly, users can then click on its responses to see where it got its information from. BlenderBot 3, in other words, can cite its sources.
By releasing the chatbot to the general public, Meta wants to collect feedback on the various problems facing large language models. Users who chat with BlenderBot will be able to flag any suspect responses from the system, and Meta says it’s worked hard to “minimize the bots’ use of vulgar language, slurs, and culturally insensitive comments.” Users will have to opt in to have their data collected; if they do, their conversations and feedback will be stored and later published by Meta for use by the general AI research community.
“We are committed to publicly releasing all the data we collect in the demo in the hopes that we can improve conversational AI,” says Kurt Shuster, a research engineer at Meta who helped create BlenderBot 3.
Releasing prototype AI chatbots to the public has, historically, been a risky move for tech companies. In 2016, Microsoft released a chatbot named Tay on Twitter that learned from its interactions with the public. Somewhat predictably, Twitter’s users soon coached Tay into regurgitating a range of racist, antisemitic, and misogynistic statements. In response, Microsoft pulled the bot offline less than 24 hours later.
Meta says the world of AI has changed a lot since Tay’s malfunction and that BlenderBot has all sorts of safety rails that should stop Meta from repeating Microsoft’s mistakes.
Crucially, says Mary Williamson, a research engineering manager at Facebook AI Research (FAIR), while Tay was designed to learn in real time from user interactions, BlenderBot is a static model. That means it’s capable of remembering what users say within a conversation (and will even retain this information via browser cookies if a user exits the program and returns later), but this data will only be used to improve the system further down the line.
“It’s just my personal opinion, but that [Tay] episode is relatively unfortunate, because it created this chatbot winter where every institution was afraid to put out public chatbots for research,” Williamson tells The Verge.
Williamson says that most chatbots in use today are narrow and task-oriented. Think of customer service bots, for example, which often just present users with a preprogrammed dialogue tree, narrowing down their query before handing them off to a human agent who can actually get the job done. The real prize is building a system that can conduct a conversation as free-ranging and natural as a human’s, and Meta says the only way to achieve this is to let bots have free-ranging and natural conversations.
“This lack of tolerance for bots saying unhelpful things, in the broad sense of it, is unfortunate,” says Williamson. “And what we’re trying to do is release this very responsibly and push the research forward.”
In addition to putting BlenderBot 3 on the web, Meta is also publishing the underlying code, training dataset, and smaller model variants. Researchers can request access to the largest model, which has 175 billion parameters, through a request form.