51% of AI News Answers Found to Have Significant Issues

New BBC research warns of the limitations of AI assistants and their potential to misrepresent information about current events. The month-long study tested four prominent AI assistants – OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity.

The findings show that 51% of all AI answers to news-related questions contained significant issues, including:

– 19% of AI answers that cited BBC content introduced factual errors.
– 13% of quotes attributed to BBC articles were altered or did not appear in the cited article.

Such errors can mislead readers and distort the facts being reported. The study highlights the need for publishers, like the BBC, to have control over how their content is used by AI assistants, and for AI companies to be transparent about how they process news content and about their error rates.

The BBC’s Programme Director for Generative AI, Pete Archer, emphasized the importance of responsible use of AI in media. “We’re excited about the future of AI, but we must ensure it brings value while maintaining accuracy and trustworthiness.”

The full research can be found on the BBC website.

Source: https://www.bbc.com/mediacentre/2025/bbc-research-shows-issues-with-answers-from-artificial-intelligence-assistants