Google I/O 2025 Ignores AI Hallucinations Problem

Google’s annual conference, I/O 2025, focused on artificial intelligence, but one major issue – hallucinations in AI models – was noticeably absent from the discussion. The term refers to invented facts and inaccuracies in large language models’ responses. According to the major AI vendors’ own metrics, hallucinations occur in over 40 percent of cases.

Despite the significance of the problem, Google offered little detail on how it addresses hallucinations. It pointed instead to a process in which the model checks its own work before delivering an answer, an approach that experts met with skepticism. The lack of transparency, and the limited emphasis on truthfulness, raises concerns about the reliability of AI chatbots.

Google’s new AI model, Gemini 2.5 Pro, is touted as intelligent, yet it scores only 52.9 percent on a benchmark of simple factual questions. This underscores the hallucination problem and the need for better fact-checking capabilities. The company’s attempts to address the issue through novel approaches, such as agentic reinforcement learning, are promising but not yet proven effective.

The absence of any discussion of AI hallucinations at Google I/O 2025 suggests that companies may be overly optimistic about their AI models’ abilities. While hallucinations may eventually be solved, the current state of research indicates that this remains a hard problem. Unless significant improvements are made, the era of AI Search may usher in an error-filled landscape.

Source: https://mashable.com/article/google-io-2025-hallucinations