Nvidia Unveils New Chips for AI Model Deployment and High-Performance Computing

Nvidia announced new chips at its annual GTC conference on Tuesday, aimed at meeting the growing demand for deploying artificial intelligence (AI) models. The company revealed Blackwell Ultra, a family of chips set to ship later this year, and Vera Rubin, its next-generation GPU expected to ship in 2026.

The announcements mark Nvidia’s shift to an annual release cadence for new chip architectures, a change from its previous every-other-year schedule. The faster pace is intended to keep up with the rapidly evolving AI market, which has grown dramatically since OpenAI released ChatGPT in 2022.

Nvidia’s sales have more than quadrupled since the AI boom transformed its business, driven largely by its “big GPUs” used to train advanced AI models. Cloud companies such as Microsoft, Google, and Amazon are among the biggest buyers of Nvidia chips, which power the data centers where AI models are developed and deployed.

The new Vera Rubin GPU is expected to be twice as fast as Nvidia’s current Blackwell chips, with improved performance and support for up to 288 gigabytes of fast memory, a key specification for AI developers. Nvidia also announced updates to its networking parts, its Dynamo software package, and new laptops and desktops that use its chips.

CEO Jensen Huang emphasized the importance of keeping pace with the rapidly evolving AI landscape, citing recent breakthroughs in “agentic AI” that enable machines to reason through complex problems. The company’s new chips are designed to handle these advanced models efficiently, giving cloud providers and their customers a competitive edge.

Source: https://www.cnbc.com/2025/03/18/nvidia-announces-blackwell-ultra-and-vera-rubin-ai-chips-.html