
Since ChatGPT’s launch in 2022, AI chatbots have grown tremendously in both capability and popularity. And while all popular AI chatbots are prone to hallucinations, Microsoft claims its new tool can help with this very issue.
In a blog post, the tech giant recently revealed “Correction”, a tool that aims to automatically fix factually incorrect AI-generated text.
The new tool works by first flagging text that contains a factual error using “Groundedness Detection”, a feature that, according to Microsoft, “identifies ungrounded or hallucinated content”. It then fact-checks the flagged text by comparing it against a grounding source, which can be a document or an uploaded transcript.
Introduced in March this year, Groundedness Detection is similar to Google’s implementation in Vertex AI, which also lets customers ground models using third-party providers, datasets or Google Search.
Available as part of Microsoft’s Azure AI Content Safety API, which is currently in preview, Correction can be used with other AI models like Meta’s Llama and OpenAI’s GPT-4o. However, experts have said that tools like these cannot address the root cause of hallucinations.
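For developers curious about what this looks like in practice, the sketch below shows roughly how the preview REST API might be called from Python. The endpoint path, API version string and request fields (such as groundingSources and correction) reflect Microsoft’s preview documentation at the time of writing and may change before general availability, so treat this as an illustrative sketch rather than a definitive integration.

```python
import requests

# Illustrative placeholders; replace with your own Azure resource details.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-content-safety-key>"

def detect_and_correct(text: str, grounding_sources: list[str]) -> dict:
    """Check whether AI-generated `text` is grounded in the supplied
    sources and, if not, request a corrected version.

    The URL path, API version and field names below follow Microsoft's
    preview documentation and are subject to change.
    """
    url = (
        f"{ENDPOINT}/contentsafety/text:detectGroundedness"
        "?api-version=2024-09-15-preview"
    )
    payload = {
        "domain": "Generic",          # or "Medical", per the preview docs
        "task": "Summarization",      # or "QnA"
        "text": text,                 # the AI-generated text to check
        "groundingSources": grounding_sources,  # trusted reference documents
        "correction": True,           # ask the service to rewrite ungrounded text
    }
    headers = {
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/json",
    }
    response = requests.post(url, json=payload, headers=headers, timeout=30)
    response.raise_for_status()
    # The preview response reports whether ungrounded content was detected
    # and, when correction is requested, includes the rewritten text.
    return response.json()

result = detect_and_correct(
    text="The device ships with a 12-hour battery.",
    grounding_sources=["The product manual states the battery lasts 8 hours."],
)
print(result)
```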
According to a recent report by TechCrunch, Os Keyes, a PhD candidate at the University of Washington, says that while this approach can help reduce some problems, it will generate new ones.
Last month, Microsoft introduced a new Copilot feature that summarises Word documents, making lengthy files easier to get through.