Microsoft has unveiled a new service, Correction, designed to tackle one of AI’s biggest flaws: factual inaccuracies. Correction, part of Microsoft’s Azure AI Content Safety API, flags errors in AI-generated text and revises them by comparing the content against verified grounding sources, such as transcripts. While the aim is to enhance the reliability of AI responses, experts are raising doubts.
AI models like GPT and Meta’s Llama don’t actually “know” facts; they predict likely responses based on patterns in data. This leads to hallucinations, where the AI fabricates details.
Microsoft’s solution pairs a classifier model, which detects potentially hallucinated passages, with a second model that cross-checks the flagged text against the grounding sources and rewrites it. While this may reduce errors, critics argue it doesn’t address the core issue: AI models are inherently unreliable because of their statistical nature.
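Microsoft hasn’t published the implementation details, but the two-stage flow can be illustrated with a toy sketch: a first pass flags sentences unsupported by a grounding document, and a second pass would revise them against the source. The word-overlap heuristic and every name below are illustrative assumptions, not Microsoft’s actual method, which relies on trained models rather than string matching.

```python
# Toy illustration of a two-stage "flag, then correct" pipeline.
# The overlap heuristic and all names are illustrative assumptions;
# the real Azure service uses trained models against grounding sources.

def is_grounded(sentence: str, grounding: str, threshold: float = 0.5) -> bool:
    """Stage 1 (stand-in for the classifier): treat a sentence as grounded
    if enough of its content words also appear in the grounding document."""
    words = {w.strip(".,").lower() for w in sentence.split() if len(w) > 3}
    if not words:
        return True
    grounding_words = {w.strip(".,").lower() for w in grounding.split()}
    return len(words & grounding_words) / len(words) >= threshold

def correct(text: str, grounding: str) -> list[str]:
    """Stage 2 (stand-in for the rewriter): keep grounded sentences and
    mark ungrounded ones for revision against the source."""
    out = []
    for sentence in text.split(". "):
        if is_grounded(sentence, grounding):
            out.append(sentence)
        else:
            out.append("[ungrounded - needs revision against source]")
    return out

transcript = "The meeting started at 9am and the budget was approved."
draft = "The meeting started at 9am. The budget proposal was rejected outright"
print(correct(draft, transcript))
```

In this sketch, the first sentence survives because its content words appear in the transcript, while the unsupported claim about the budget is flagged; the real service would go further and rewrite the flagged span to match the source.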
Some experts, like Mike Cook, a research fellow specializing in AI at King’s College London, warn that users may develop a false sense of security, thinking AI-generated content is more accurate than it truly is.
Additionally, Microsoft’s pricing structure for the service, which caps free usage at 5,000 text records per month before charges apply, raises concerns about the feature’s accessibility.
In a rapidly evolving AI landscape, businesses are increasingly wary of inaccuracies and hallucinations, particularly as they adopt tools like Microsoft 365 Copilot. Microsoft’s new tool is a step toward solving the problem, but many believe the underlying issues still need more attention.