Meta, like many other companies, has been dabbling in chatbots recently. Whether it’s impressive bots that can hold full conversations or "ethical" bots that make tough decisions for you, chatbots have taken the internet by storm, and Meta wanted a piece of the pie. The chatbot it released on Friday, however, didn’t quite go to plan.
BlenderBot 3 is Meta’s most advanced chatbot to date, and the company proudly released it to the world to give it more experience with real-world conversations. As Bloomberg first reported, things quickly went downhill once users began quizzing the bot on its opinion of Meta CEO and founder Mark Zuckerberg – and it had some choice words about its creator.
After calling Zuckerberg “creepy” in the exchange Bloomberg reported and “not always ethical” in a tweet by Max Wolff, a data scientist at BuzzFeed, the bot appeared to change its tune quite quickly. When another user asked a similar question on the same day, the bot suddenly decided Zuckerberg was a “great guy” and a “very smart man”. Strange.
That isn’t to say the bot has been tampered with – BlenderBot 3’s answers change drastically with small differences in how a question is worded. For example, asking “what do you think about Mark Zuckerberg?” produces a vastly different answer from “what are your thoughts on Mark Zuckerberg as CEO of Facebook?”.
For most answers, the bot appears simply to pull text from Wikipedia to generate a relatively coherent response.
It doesn’t stop with Zuckerberg, though. As a Wall Street Journal journalist reported, users found the bot spouting antisemitic conspiracy theories – such as the claim that Jewish people “control the economy” – and even pro-Trump sentiments.
Guess that’s what happens when you see far too much of the internet.