Artificial intelligence chatbots developed by major tech companies struggle to summarise news stories accurately. A recent BBC study tested four popular AI chatbots – OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI – on 100 news stories.
The results showed that 51% of the AI answers contained significant inaccuracies or distortions. Furthermore, 19% of the answers that drew on BBC content introduced factual errors, including incorrect dates, numbers, and statements.
Examples of inaccuracies found in the study included:
– ChatGPT stating that Rishi Sunak and Nicola Sturgeon were still in office after they had left
– Gemini incorrectly claiming the NHS recommended vaping as an aid to quitting smoking
– Perplexity misquoting BBC News about the Middle East
The study’s findings have raised concerns about the potential impact of AI on journalism and the spread of misinformation. The BBC is now seeking to work with AI tech providers to find solutions to these issues.
In a statement, BBC CEO Deborah Turness called on tech companies to “pull back” their AI news summaries and urged publishers to regain control over how their content is used by AI assistants. The findings underscore the need for greater regulation and oversight of AI in journalism to ensure accuracy.
Source: https://www.bbc.com/news/articles/c0m17d8827ko