OpenAI has unveiled its latest model series, GPT-5.2, trained on NVIDIA infrastructure spanning Hopper-architecture GPUs and GB200 NVL72 rack-scale systems. The release marks a significant milestone in AI research and demonstrates NVIDIA's ability to support training at this scale.
GPT-5.2 is the latest example of leading AI builders training and deploying models at scale on NVIDIA's full-stack AI infrastructure. Pretraining and post-training are essential for making reasoning models smarter, but they demand massive compute. Together, the three scaling laws – pretraining, post-training, and test-time scaling – describe how additional compute translates into models that can handle increasingly complex queries.
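To make the pretraining scaling law concrete, here is a minimal sketch assuming the commonly cited power-law form, in which loss falls as a power of training compute. The function name, constants, and units below are illustrative placeholders, not figures from the NVIDIA post.

```python
# Minimal sketch of a pretraining scaling law, assuming the widely used
# power-law form L(C) = (C_c / C) ** alpha, where C is training compute.
# All constants here are illustrative placeholders, not values from the article.

def pretraining_loss(compute: float, c_critical: float = 1.0, alpha: float = 0.05) -> float:
    """Predicted loss for a given compute budget (arbitrary units)."""
    return (c_critical / compute) ** alpha

# Doubling compute repeatedly shows the steady but diminishing loss reduction
# that motivates ever-larger training clusters.
for budget in (1, 2, 4, 8, 16):
    print(f"compute = {budget:>2}x baseline -> predicted loss = {pretraining_loss(budget):.4f}")
```

The small per-doubling gains are the point: under a power law, each fixed improvement in loss requires a multiplicative increase in compute, which is why frontier training runs keep moving to larger systems.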
In MLPerf Training industry benchmarks, NVIDIA's GB200 NVL72 systems delivered 3x faster training performance than the Hopper architecture on the largest model tested. At that rate, a training run that would take three months on Hopper finishes in roughly one, helping developers shorten development cycles and ship new models sooner.
GPT-5.2 is part of a broader trend, as most leading large language models were trained on NVIDIA platforms. The company supports AI development across multiple modalities, including speech, image, and video generation, as well as emerging areas like biology and robotics.
Notable examples include Runway's Gen-4.5, a top-rated video model developed entirely on NVIDIA GPUs, and GWM-1, a state-of-the-art general world model trained on the Blackwell platform – evidence that the same infrastructure spans workloads from video generation to world modeling.
NVIDIA's Blackwell platform is widely available from leading cloud service providers and server makers, delivering the scalable performance that large pretraining runs demand. With Blackwell-powered instances already on offer from major cloud providers and partners, the future of AI is being built on NVIDIA.
Source: https://blogs.nvidia.com/blog/leading-models-nvidia