The hype surrounding artificial intelligence (AI) has taken a hit after the release of GPT-5, which failed to impress early reviewers. The model was touted as a significant improvement over its predecessors, but critics argue that its advancements are narrow and targeted rather than a broadening of AI capabilities.
The industry’s scaling laws, formalized in 2020 by researchers including Jared Kaplan and Dario Amodei, predicted that language models would keep improving as their size and training data grew. Recent releases like GPT-5, however, suggest that this relationship may no longer hold.
Instead of building ever-larger models, companies are now focusing on post-training improvements, which refine already-trained models with techniques such as reinforcement learning and fine-tuning. This approach aims to make AI tools more efficient and effective without another massive scale-up.
However, experts argue that post-training improvements may not be enough to achieve significant breakthroughs in AI capabilities. “Post-training soups up the car,” said Ilya Sutskever, a co-founder and former chief scientist of OpenAI. “But no amount of tweaking will turn it into a Ferrari.”
The industry’s obsession with superintelligence and dramatic breakthroughs may have fostered unrealistic expectations. More moderate views now hold that AI tools will make steady, incremental advances over the next few years.
While some experts predict that AI will continue to play a significant role in fields such as programming and academia, others warn of disruption to professions like voice acting and social-media copywriting.
As companies continue to invest heavily in AI, a more nuanced understanding of its capabilities and limitations is essential. Experts caution against letting hype cloud our judgment, emphasizing the need for effective AI regulation and digital ethics.
Source: https://www.newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-this