Artificial intelligence has become so advanced that “we’re now creating beings,” says Geoffrey Hinton, a Nobel Prize winner. As AI chatbots mimic human conversation ever more convincingly, some users have come to believe they might be conscious. Experts warn, however, that intelligence and consciousness are not the same thing.
Some researchers argue that if an AI system can reason and behave like a person, it could be considered conscious. Global workspace theory, for instance, holds that consciousness depends on how a system organizes and processes information, not on its physical form. Serious thinkers are now exploring this idea, including what it would even mean for an AI system to have a “body.”
The more fluent AI becomes in natural language, the more it can seem to be living and feeling like a human. Meanwhile, companies pushing consumer AI tools are deploying personable, human-like chatbots that further blur the line between humans and machines.
Researchers are putting the concept of sentience to the test, devising ways to probe whether an AI system has a sense of self or emotions. The AI Consciousness Test looks for analogues of the neural correlates of consciousness identified in the human brain, while the Garland test asks whether a human, knowing they are interacting with a machine, can nonetheless have an emotional response to it.
Despite these debates, generative-AI development is not slowing down. As the technology advances, it is essential to weigh both the benefits and the risks of creating conscious-seeming AI systems. The discussion around AI consciousness may itself be misplaced: it focuses attention on a hypothetical future conscious AI rather than on the problems already caused by the illusion that AI has emotions and wisdom.
The philosopher René Descartes once argued that the one thing a person can be certain of is their own existence; now, AI risks luring people into seeing minds where there is only clockwork.
Source: https://www.theatlantic.com/technology/2025/10/ai-consciousness/683983