Google is set to launch its own rival to OpenAI's ChatGPT, a language model that can answer your questions. The artificial intelligence (AI) chatbot, named Bard, will roll out to the public in the coming weeks.
Bard, Google CEO Sundar Pichai explained in an update, will initially be powered by a scaled-down version of the company's Language Model for Dialogue Applications (LaMDA).
"This much smaller model requires significantly less computing power, enabling us to scale to more users, allowing for more feedback," Pichai explained in a blog post. "We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information".
The AI will begin to feature in Google Search, where it will attempt to answer more complex questions on which there may be multiple viewpoints. Google says that Bard will "distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture". Part of this will involve seeking out perspectives from around the web.
Google is late to the market with its AI language model, beaten to the punch by OpenAI with its free-to-use ChatGPT. The creator of Gmail recently warned that AI could totally disrupt Google within a few years by eliminating the need for a search results page, which is where Google makes the bulk of its ad revenue. The firm says that the new feature, which will sit at the top of the search results page, will roll out "soon".
LaMDA, on which Bard is based, made headlines last June when Google engineer Blake Lemoine was placed on leave after he became convinced that the chatbot was sentient. Following several conversations with the bot, Lemoine concluded that it was conscious, had emotions, and was afraid of being turned off. He claimed that the bot asked him to hire a lawyer to act on its behalf.
However, AI experts are certain that the model is not sentient, with AI researcher Gary Marcus explaining that it is not remotely intelligent and is more like a "spreadsheet for words".
"All they do is match patterns, draw from massive statistical databases of human language," Marcus wrote on his substack. "The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient."