Google’s AI Overviews Provide Misleading Answers on Made-Up Phrases

Google’s experimental generative AI can produce convincing explanations for phrases that were never real idioms, even as it labels the feature experimental. The technology, known as AI Overviews, draws on vast training data to generate responses, and those responses can be confidently worded yet misleading or simply wrong.

For instance, the phrase “wired is as wired does” might sound like an idiom, but it is just a nonsensical combination of words. The same goes for invented phrases such as “never throw a poodle at a pig,” which have no established meaning or history of use, yet can still prompt the AI to supply a plausible-sounding definition.

According to experts, AI Overviews’ answers reflect two tendencies of generative AI: it predicts the statistically most probable continuation of a query, and it is tuned to produce responses that please the user. Together these can yield confident but inaccurate answers, and the technology is especially unreliable on uncommon knowledge, languages with little training data, and minority perspectives.

Google acknowledges that when a query rests on a false premise, its systems will try to generate the most helpful context they can from limited relevant material, even when there is no real answer to give. Some experts warn that this behavior can mislead users, and that such responses should be taken with a grain of salt.

While coaxing AI Overviews into defining made-up sayings might seem a harmless distraction, the behavior highlights real limitations and potential flaws in generative AI technology.

Source: https://www.wired.com/story/google-ai-overviews-meaning