Google Senior Vice President Prabhakar Raghavan has warned the public about a little-known problem with artificial intelligence (AI) chatbots: they "hallucinate".
Google is set to launch its own rival to OpenAI's ChatGPT, a language model that can answer your questions. Named Bard, the chatbot will roll out to the public in the coming weeks, according to Google CEO Sundar Pichai.
Ahead of the launch, Google demonstrated the powers of Bard in a promo video. Unfortunately, people noticed that the chatbot – a scaled-down version of its Language Model for Dialogue Applications (LaMDA), which convinced one engineer it was sentient – came up with incorrect statements about the James Webb Space Telescope (JWST).
Bard claimed that the JWST was the first telescope to take pictures of planets outside of the solar system, even though the first image was produced by the Very Large Telescope in 2004, almost two decades before JWST began its work.
As well as being a wallet-ache for investors (the error wiped over $100 billion off the market value of Google's parent company, Alphabet), the problem is a headache for Google and other AI chatbot creators.
Speaking to German newspaper Welt am Sonntag, Raghavan warned that chatbots may deliver users complete nonsense in answers that seem perfectly coherent.
"This type of artificial intelligence we're talking about can sometimes lead to something we call hallucination," Raghavan told Welt am Sonntag. "This is then expressed in such a way that a machine delivers a convincing but completely fictitious answer."
For instance, when asked what is the record for crossing the English Channel on foot, ChatGPT claims that several different people have set that record (taking between four and 12 hours).
According to Peter Relan, co-founder of Got It AI, ChatGPT hallucinates about 15 to 20 percent of the time.
“The key here is to find out when it is," he told Datanami, "and make sure that you have an alternative answer or a response you deliver to the user, versus its hallucination.”
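The pattern Relan describes – detect when the model is hallucinating and substitute a safer response – can be sketched in miniature. The snippet below is purely illustrative: the verification step, the toy knowledge base, and all function names are assumptions for the sake of the example, not a real Got It AI or Google API. In practice the check would be a learned classifier or a retrieval step against trusted documents, not an exact-match lookup.

```python
# Hypothetical sketch of "detect the hallucination, deliver an alternative
# answer". A real system would score the model's answer with a verifier
# model or retrieval check; here a toy knowledge base stands in for that.

def verify_against_kb(question: str, answer: str, knowledge_base: dict) -> bool:
    """Toy check: trust the answer only if it contains the known fact
    recorded for this question."""
    known_fact = knowledge_base.get(question)
    return known_fact is not None and known_fact.lower() in answer.lower()

def answer_with_fallback(question: str, model_answer: str, knowledge_base: dict,
                         fallback: str = "I'm not sure; please consult a primary source.") -> str:
    """Return the model's answer if it passes verification, else a fallback."""
    if verify_against_kb(question, model_answer, knowledge_base):
        return model_answer
    return fallback

kb = {"Which telescope took the first image of an exoplanet?": "Very Large Telescope"}

# A hallucinated claim fails the check and is replaced by the fallback.
print(answer_with_fallback(
    "Which telescope took the first image of an exoplanet?",
    "JWST took the first image of an exoplanet.",
    kb))
```

The design choice here mirrors Relan's point: the hard part is not generating the fallback but reliably detecting *when* the model has fabricated its answer.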
Despite teasing last week that Bard would launch "soon", Google has released no further information. Raghavan told Welt am Sonntag that the company wants to test the bot on a large scale so that it can be reassured the bot is giving out factual information.
"Of course we feel the urgency, but we also feel the great responsibility," Raghavan said. "We definitely don't want to mislead the public."