Tech companies’ claims about creating “thinking machines” with large language models (LLMs) may be overblown, according to Benjamin Riley, founder of Cognitive Resonance. Current neuroscience suggests that human thinking is largely independent of language, and LLMs don’t possess true intelligence.
Riley argues that LLMs are tools that emulate language’s communicative function, not cognitive processes like thinking and reasoning. Brain-imaging studies show that distinct brain regions activate for different tasks, such as solving math problems versus processing language. Even individuals who have lost their language abilities can still think clearly.
Leading AI figures, including Turing Award winner Yann LeCun, share similar concerns about LLMs’ limitations. Researchers have also found that LLMs hit a hard ceiling, struggling to generate novel outputs beyond what they have already been trained on. This raises questions about whether LLM-powered AI can solve complex problems such as cancer and climate change.
Riley warns that relying too heavily on LLMs could lead to formulaic work, stifling innovation. “An LLM will always produce something average,” he says. “It’s forever trapped in the vocabulary we’ve encoded in our data.”
Source: https://futurism.com/artificial-intelligence/large-language-models-willnever-be-intelligent