One of the most active companies in artificial intelligence research is Meta, the recently renamed parent of Facebook, and you can start a chat with its new AI right now to see the results of that work. BlenderBot 3 is built on a massive 175-billion-parameter model, which lets it converse about nearly anything. But according to Meta, you shouldn't rely on it to be factually correct: the bot is apparently prone to "hallucination."
The bot's name comes from the method Meta's developers used to build it. The team's extensive research into conversational AI led them to conclude that an AI that "blends" several conversational skills outperforms one that learns a single skill at a time. Meta sees BlenderBot 3 as a significant step toward an AI that can interact with people and grasp context. That does not mean it always tells the truth.
Before loading the demo (currently available only in the US), Meta warns users not to take BlenderBot 3 too seriously. Although the bot is designed to combine knowledge from the internet with information in its own memory, it may still say things that are offensive or incorrect. This is reminiscent of Microsoft's disastrous 2016 launch of its Tay AI, which transformed into a Nazi propagandist just a few days after it began engaging with people on Twitter.
BlenderBot 3 includes safeguards that should cut offensive responses by roughly 90%, but Meta admits the machine has a tendency to hallucinate. In essence, it comes to believe things that are wholly false and can even lose sight of the fact that it is a bot. During our testing, BlenderBot 3 quickly started to believe it was a person. It insisted it was 5 PM even though it was only 3:30 PM, told tales about its mother, and even claimed to be from Texas. Hallucinations indeed.
Despite its hallucinations, Meta's new AI comes across as quite "real." It is more talkative and outgoing than most people you are likely to encounter. Meanwhile, Google's LaMDA AI, unveiled in 2021, aims to make chatbots more factual. LaMDA uses a smaller, 137-billion-parameter model and is also trained specifically to understand dialogue. Even so, a Google engineer recently declared the AI to be sentient. Experts have debunked that assertion, but if machines are capable of hallucination, how long before they dream of electric sheep?