Google’s Gemini chatbot is drawing accuracy concerns on sensitive topics after a change in the guidelines for its human raters, also known as “prompt engineers.” Until recently, contractors working through GlobalLogic could skip prompts that fell outside their domain expertise. A new internal guideline now requires them to evaluate every prompt, even those beyond their area of knowledge.
The change has raised worries that Gemini may produce inaccurate information on complex topics such as healthcare. Contractors are now instructed to “rate the parts of the prompt you understand” and add a note when they lack expertise. Under the updated guidelines, skipping is permitted only when a prompt is “completely missing information” or when the content is harmful.
Google has not commented directly on the changes, but a spokesperson said that raters perform a range of tasks beyond reviewing answers for content, and that their aggregated feedback helps improve Gemini’s accuracy.
Source: https://techcrunch.com/2024/12/18/exclusive-googles-gemini-is-forcing-contractors-to-rate-ai-responses-outside-their-expertise