Google’s Gemini 2.5 Pro Report Falls Short on Transparency

Google has released a technical report showing the results of internal safety evaluations for its latest AI model, Gemini 2.5 Pro. However, experts say the report is light on details and lacks transparency.

Technical reports provide essential information about AI models, but Google’s approach differs from that of other companies: it publishes a report only once it considers a model to have graduated from the “experimental” stage, and it does not include findings from all of its “dangerous capability” evaluations.

Several experts expressed disappointment with the report, citing the absence of details on Google’s Frontier Safety Framework (FSF), which the company introduced last year to identify future AI capabilities that could cause severe harm. Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, said the sparse report makes it impossible to verify whether Google is living up to its public commitments.

Thomas Woodside, co-founder of the Secure AI Project, echoed these concerns, suggesting that the lack of timely supplemental safety evaluations may undermine trust in Google’s models. He also pointed to the absence of a report for Gemini 2.5 Flash, a smaller and more efficient model announced last week.

Google is not alone: Meta released a similarly skimpy safety evaluation for its new Llama 4 open models, and OpenAI opted not to publish any report at all for its GPT-4.1 series. This pattern of sporadic and vague reporting raises questions about the AI industry’s commitment to transparency, particularly among major players like Google.

The trend also puts Google’s assurances to regulators at risk. The company promised to uphold high standards of AI safety testing and reporting, and to publish safety reports for every “significant” public AI model within scope, but it has yet to deliver on that promise.

Source: https://techcrunch.com/2025/04/17/googles-latest-ai-model-report-lacks-key-safety-details-experts-say