
Google puts Nvidia on high alert as it showcases Trillium, its rival AI chip, while promising to bring H200 Tensor Core GPUs within days

Trillium TPU (Image credit: Google)

  • Trillium offers 4x training boost, 3x inference improvement over TPU v5e
  • Enhanced HBM and ICI bandwidth for LLM support
  • Scales up to 256 chips per pod, ideal for extensive AI tasks

Google Cloud has unleashed its latest TPU, Trillium, the sixth-generation model in its custom AI chip lineup, designed to power advanced AI workloads.

First announced back in May 2024, Trillium is engineered to handle large-scale training, tuning, and inferencing with improved performance and cost efficiency.

The release forms part of Google Cloud’s AI Hypercomputer infrastructure, which integrates TPUs, GPUs, and CPUs alongside open software to meet the increasing demands of generative AI.

A3 Ultra VMs arriving soon

Trillium promises significant improvements over its predecessor, TPU v5e, with over a 4x boost in training performance and up to a 3x increase in inference throughput. Trillium delivers twice the HBM capacity and doubled Interchip Interconnect (ICI) bandwidth, making it particularly suited to large language models like Gemma 2 and Llama, as well as compute-heavy inference applications, including diffusion models such as Stable Diffusion XL.

Google is keen to stress Trillium’s focus on energy efficiency as well, with a claimed 67% increase compared to previous generations.

Google says its new TPU has demonstrated substantially improved performance in benchmark testing, delivering a 4x increase in training speeds for models such as Gemma 2-27b and Llama2-70B. For inference tasks, Trillium achieved 3x greater throughput than TPU v5e, particularly excelling in models that demand extensive computational resources.

Scaling is another strength of Trillium, according to Google. The TPU can link up to 256 chips in a single, high-bandwidth pod, expandable to thousands of chips within Google’s Jupiter data center network, providing near-linear scaling for extensive AI training tasks. With Multislice software, Trillium maintains consistent performance across hundreds of pods.
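For readers curious what that chip-level parallelism looks like from the software side, here is a minimal, hypothetical sketch in JAX (Google's primary framework for programming TPUs). It is illustrative only: it runs on any JAX backend, the chip counts are whatever the runtime reports rather than anything Trillium-specific, and it shows plain single-slice data parallelism, not Google's Multislice software itself.

```python
import jax
import jax.numpy as jnp

# Enumerate the accelerator chips visible to the runtime. On a Trillium
# pod slice this could report up to 256 TPU chips; on a laptop CPU it
# reports a single device.
devices = jax.devices()
print(f"backend={jax.default_backend()} chip_count={len(devices)}")

# Data parallelism: split a batch evenly across the available chips and
# run the same computation on each shard -- the basic pattern that
# pod-scale training builds on.
per_chip = jnp.arange(8.0).reshape(len(devices), -1)
doubled = jax.pmap(lambda shard: shard * 2.0)(per_chip)
print(doubled.reshape(-1))
```

On real TPU hardware the same code would simply see more devices and spread the shards across them, which is why near-linear scaling is the headline claim for this kind of architecture.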


Tied in with the arrival of Trillium, Google also announced the A3 Ultra VMs featuring Nvidia H200 Tensor Core GPUs. Scheduled for preview this month, they will offer Google Cloud customers a high-performance GPU option within the tech giant’s AI infrastructure.

Video: Trillium TPU, built to power the future of AI (YouTube)

You might also like

  • Google Cloud: No-one can deliver business AI value like us
  • Google’s TPU v5p chip is faster and has more memory and bandwidth
  • Intel and Google Cloud team up to launch super-secure VMs

Wayne Williams is a freelancer writing news for TechRadar Pro. He has been writing about computers, technology, and the web for 30 years. In that time he wrote for most of the UK’s PC magazines, and launched, edited and published a number of them too.
