Nvidia’s newest chips have significantly improved the efficiency of training large artificial intelligence systems. According to data released by MLCommons, the number of chips required to train large language models has dropped sharply. On a per-chip basis, Nvidia’s Blackwell chips are more than twice as fast as its previous-generation Hopper chips. The data shows that 2,496 Blackwell chips completed a training test in just 27 minutes, while it took more than three times as many Hopper chips to beat that time.
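
As a rough, back-of-the-envelope sketch (not part of the MLCommons data), the reported figures can be combined into a simple chip-count comparison. The Hopper run’s exact chip count and completion time are not given here, so the Hopper values below are illustrative assumptions rather than benchmark results.

```python
# Rough illustration of the reported comparison; the exact Hopper figures
# are not given above, so the Hopper values here are labeled assumptions.

blackwell_chips = 2496          # reported chip count
blackwell_minutes = 27          # reported completion time

# The article says Hopper needed more than three times as many chips to
# beat 27 minutes; treat 3x as a lower bound on the Hopper chip count.
hopper_chips_min = 3 * blackwell_chips

# Implied per-chip ratio if both runs took roughly the same time
# (a simplifying assumption -- scaling across more chips is not linear,
# which is why the per-chip claim in the text is "more than twice as fast").
implied_per_chip_ratio = hopper_chips_min / blackwell_chips

print(f"Blackwell run: {blackwell_chips} chips x {blackwell_minutes} min")
print(f"Hopper run: at least {hopper_chips_min} chips to match or beat that time")
print(f"Implied per-chip ratio under the equal-time assumption: >{implied_per_chip_ratio:.0f}x")
```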