A recent study by the BBC has found that leading artificial intelligence assistants, including OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity, produce significant distortions, factual inaccuracies, and misleading content when asked news-related questions.
The research put these four AI tools to 100 questions about the news, using BBC articles as sources. About half of the responses contained “significant issues”, with errors ranging from outdated information and misrepresented NHS advice to flatly incorrect statements.
Misinterpreted quotes from credible sources compounded the inaccuracies: Gemini, for example, framed Lucy Letby’s guilt as an open question, omitting the context of her court convictions. Copilot and Perplexity likewise produced false information on sensitive topics, including the case of a French rape victim.
The study underscores the need for greater accountability from AI companies. BBC News chief executive Deborah Turness warned that “Gen AI tools are playing with fire” and threaten to undermine public trust in facts. The research also suggests that closer collaboration between media companies and AI providers is needed to ensure accurate information and mitigate potential harm.
Other AI tools have faced similar scrutiny; Apple, for instance, suspended its AI-generated, BBC-branded news alerts after they produced inaccurate summaries. The investigation reinforces the case for transparency and quality-control measures in AI-assisted news dissemination.
Source: https://www.theguardian.com/technology/2025/feb/11/ai-chatbots-distort-and-mislead-when-asked-about-current-affairs-bbc-finds