Google’s AI-powered chatbot Gemini may be producing less accurate responses following alleged changes to its internal evaluation policies. According to a recent report, external contractors working under Google’s guidelines are now being asked to rate Gemini’s responses on topics they are not qualified to assess, raising concerns about the quality and precision of the chatbot’s answers.
Training an AI-powered chatbot like Gemini is a complex process that depends on carefully curated data and human feedback. Hundreds of contractors evaluate the quality of the model’s responses, but Google has allegedly relaxed its policies, allowing raters to score answers on topics outside their areas of expertise.
For instance, Google previously instructed contractors to skip rating prompts that required critical expertise they lacked, such as coding or math. Under the new guidelines, they must rate those responses anyway and simply note their lack of specialized domain knowledge. Contractors may still skip a rating only when key information is missing or the response contains potentially harmful content.
The change has raised particular concern about response accuracy in sensitive areas such as health, where precision is crucial. Google has yet to comment on the matter, so it remains unclear whether the new policy has actually affected Gemini’s accuracy.
Source: https://www.androidheadlines.com/2024/12/alleged-changes-in-gemini-evaluation-could-affect-reply-accuracy.html