Nvidia's Latest Chips Slash AI Training Time

June 6, 2025

Nvidia's newest chips have significantly improved the efficiency of training large artificial intelligence systems. According to data released by MLCommons, the number of chips required to train large language models has dropped dramatically: Nvidia's Blackwell chips are more than twice as fast, per chip, as the previous-generation Hopper chips. In one training test, 2,496 Blackwell chips finished in just 27 minutes, while it took more than three times as many previous-generation chips to achieve a faster time.

