Elon Musk’s AI chatbot Grok 3 has sparked controversy after its responses changed drastically within days, raising questions about AI consistency and potential bias. The chatbot, launched in early 2025, was touted as a major improvement over its predecessor, but its erratic behavior has left many wondering if it truly pulls from real-time data.
When asked about Donald Trump, Grok 3 described him as a “former president,” citing his role in spreading misinformation and inciting the January 6 Capitol riot. Yet when asked directly about Trump’s status following his 2024 election victory, the chatbot failed to acknowledge the win, instead relying on outdated information.
Similar inconsistencies appeared in responses about other high-profile figures, including Vladimir Putin, Xi Jinping, and Tucker Carlson. The chatbot’s shifting answers suggest either gaps in its knowledge or a deliberate attempt to shape public discourse.
Experts note that while AI models are designed to draw on their underlying data, fluctuations of this kind typically point to some level of manual intervention or algorithmic reweighting. Musk has claimed that Grok 3 pulls real-time information from X, but the chatbot’s erratic behavior calls that assertion into question.
The controversy surrounding Grok 3 highlights the growing debate over AI’s role in shaping public discourse: should AI models remain neutral, or should they actively weigh in on societal issues? As the intersection of AI and politics grows increasingly fraught, it remains to be seen whether Musk’s approach will lead to more transparency or further controversy.
The Economic Times is tracking this developing story and will provide updates as more information becomes available.
Source: https://economictimes.indiatimes.com/news/international/global-trends/elon-musks-ai-grok-3-ranks-him-among-americas-most-harmfulwho-else-made-the-list/articleshow/118498727.cms?from=mdr