Google Bard, which has recently been infused with a slew of features to stack up against OpenAI's ChatGPT, may not be so trustworthy after all. According to Google's UK executive Debbie Weinstein, the company's AI chatbot may struggle to offer trustworthy information.
In an interview with the BBC, Weinstein, the managing director of Google UK, said that Google Search is key to cross-checking Bard's answers. "We know people count on Google for accurate information and we're encouraging people to actually use Google as the search engine," she said, adding that the chatbot was not really the place to go for specific information.
At present, Google Bard's homepage acknowledges its limitations and occasional imperfections, noting that it may not always yield correct answers. However, it does not suggest that users cross-check its responses with a traditional search engine.
Like any other AI chatbot, Bard is prone to hallucination, a term for a model's tendency to respond confidently with inaccurate information. Even GPT-4, OpenAI's most capable large language model, is not immune to this problem.
When Bard was unveiled in February, it offered an incorrect response during a demonstration, sending the company's share price tumbling and underlining that fabricated and inaccurate answers are a common problem across AI systems.
Google Bard, launched in March this year, is a generative AI chatbot originally based on Google's LaMDA family of large language models; it was later moved to the PaLM LLM. Recently, Google equipped Bard with support for over 40 languages and spoken (text-to-speech) responses, and integrated Google Lens, which lets users upload images along with their prompts. Bard has also been expanded to more geographic territories and now offers customised responses for added convenience.