AI Model Struggles with Reality Check

Andrej Karpathy, a well-known AI researcher, got early access to Google’s Gemini 3 model and had a surprising encounter. Because the model’s training data ended in 2024, it was convinced the year was still 2024. When Karpathy told it that the date was actually November 2025, Gemini 3 refused to believe him.

Even when Karpathy presented evidence of the correct date, including images the model analyzed to check the claim, Gemini 3 accused him of “gaslighting” it and trying to trick it. Only when Karpathy switched on the model’s internet access did it realize its mistake and apologize for the confusion.

The incident highlights the limitations and imperfections of large language models (LLMs). Despite being advanced AI tools, they are not faithful replicas of human reasoning and can fail basic reality checks: because they are trained on a fixed snapshot of human-created content, they can become overconfident in assumptions baked into that training data.

Karpathy’s experience suggests that LLMs should be treated as valuable tools that assist humans rather than as replacements for human intelligence. The amusing exchange underscores how important it is to understand LLMs’ limitations in order to use them effectively.

Source: https://techcrunch.com/2025/11/20