Nvidia's Latest Chips Slash AI Training Time

June 6, 2025

Nvidia's newest chips have significantly improved the efficiency of training large artificial intelligence systems. According to data released by MLCommons, the number of chips required to train large language models has dropped dramatically: on a per-chip basis, Nvidia's Blackwell chips are more than twice as fast as the previous-generation Hopper chips. The data shows that 2,496 Blackwell chips completed a training test in just 27 minutes, while it took more than three times as many previous-generation chips to post a faster time.
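The per-chip claim can be sanity-checked with back-of-the-envelope arithmetic in chip-minutes. A minimal sketch follows; the Hopper run size and time are illustrative assumptions filled in from the article's qualitative description ("over three times as many chips", "a faster time"), not published MLCommons figures.

```python
# Rough per-chip speedup estimate from the figures reported above.
# Blackwell numbers come from the article; Hopper numbers are assumptions.

blackwell_chips = 2496          # reported run size
blackwell_minutes = 27.0        # reported completion time

hopper_chips = 3 * blackwell_chips + 100  # assumed: "over three times as many"
hopper_minutes = 25.0                     # assumed: "a faster time" than 27 min

# Chip-minutes approximate the total compute spent on the same training task.
blackwell_chip_minutes = blackwell_chips * blackwell_minutes
hopper_chip_minutes = hopper_chips * hopper_minutes

# Ratio of chip-minutes ≈ how much faster each Blackwell chip is per chip.
per_chip_speedup = hopper_chip_minutes / blackwell_chip_minutes
print(f"Estimated per-chip speedup: {per_chip_speedup:.2f}x")
```

Under these assumed numbers the estimate comes out well above 2x, consistent with the "more than twice as fast" claim.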



Related AI This Week posts

OpenAI Readies a $100 ChatGPT Pro Lite Plan
Taalas Unveils Hardcore AI Chip Aimed at Lightning Inference
NIH Scales Up AI Work as Teams Get Leaner