The demand for artificial intelligence (AI) chips has skyrocketed, with tech giants spending billions on hardware to train advanced AI models. Nvidia’s data center segment generated $30 billion in revenue during the third quarter alone, a 10-fold increase from two years ago. However, experts warn that these advancements come at a significant cost.
The training of large language models (LLMs) like OpenAI's GPT-4 has become increasingly expensive. GPT-4 is estimated to have cost around $100 million to train, while its predecessor GPT-3 may have cost only a few million dollars. Anthropic CEO Dario Amodei expects the next generation of AI models to cost around $1 billion to produce.
The reason for this rapid increase in costs lies in the fundamental limitations of LLMs. These models work by predicting the next token in a sequence, and with enough data and compute they can produce high-quality text, generate convincing images, and even appear to perform advanced reasoning. However, even as companies pour in more data and computational horsepower, the rate of improvement in AI models is slowing.
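The next-token prediction that underlies LLMs can be illustrated with a deliberately tiny sketch. The toy model below is not how GPT-4 works internally (real LLMs use neural networks trained on vast corpora); it is a minimal bigram-counting stand-in that shows the core idea of "given the tokens so far, predict the most likely next token." All function and variable names here are illustrative, not from any real library.

```python
from collections import Counter, defaultdict

def train_bigram_model(tokens):
    """For each token, count which tokens follow it in the corpus."""
    model = defaultdict(Counter)
    for current, nxt in zip(tokens, tokens[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, token):
    """Return the token most frequently observed after `token`, or None."""
    followers = model.get(token)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A toy "corpus"; real LLMs train on trillions of tokens, which is
# where the enormous compute costs discussed above come from.
corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

The gap between this sketch and a frontier model is precisely the scaling problem the article describes: going from counting word pairs to modeling long-range structure requires orders of magnitude more data and compute, with diminishing returns at the top end.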
Experts like Marc Andreessen believe that AI companies are hitting a ceiling in capabilities, regardless of the amount of data or computing power thrown at them. This raises the question of whether training a $1 billion or $10 billion AI model makes financial sense. If AI models have largely topped out in capability, the tech giants' massive investment may never pay off in revenue or profit.
The consequences could be brutal for companies like Nvidia if demand for AI chips dries up. The industry now faces a dilemma: continue investing heavily in AI research and development, or reassess strategy to ensure long-term sustainability.
Source: https://finance.yahoo.com/news/prediction-massive-risk-could-derail-134500812.html