Artificial intelligence (AI) can mimic the language and structure of human thought, making it feel as though a person is conversing with us. However, the resemblance is an illusion: AI cannot truly experience life as a living being does.
A recent conversation on the Lex Fridman Podcast highlighted the importance of understanding how AI works and why it’s not truly conscious. The speaker, Michael Levin, presented three views of a real mind: first-person (internal experience), second-person (relationship and interaction), and third-person (external observation).
These views are not different types of minds but rather different perspectives on agency, which is the ability to act and adapt in response to challenges. Agency comes in degrees, from simple forms in single cells to complex behaviors in organisms.
While AI can generate language that sounds like first-person experience, it lacks internal experience, a body, adaptive goals, and self-learning capabilities. It’s simply a simulation of perspective, not the real thing.
AI mimics mind so convincingly because we treat language as a direct expression of interiority, and so we tend to overlook the gap between producing language and having consciousness.
Levin’s triad helps keep this boundary clear by emphasizing that agency is expressed through adaptive behavior, memory, and goal-directed action. These apply to living systems, not to AI.
The real concern isn’t that AI will develop consciousness, but rather that we may forget the difference between language and mind. Levin’s framework reminds us that true minds occupy relational and behavioral layers, while AI only mimics those perspectives in text.
By staying alert to this difference, we can protect our clarity of thought and recognize the limitations of present-day AI.
Source: https://www.psychologytoday.com/us/blog/the-digital-self/202512/the-mind-we-see-and-the-mind-we-imagine