Trillium is Google’s TPU for AI training and inference, powering its latest suite of AI tools and models. The sixth generation is here.
Google has unveiled its newest Tensor Processing Unit (TPU), Trillium. This latest addition to the TPU family delivers notable gains in performance and efficiency, making it a powerful platform for AI research and development.
Trillium is Google’s most powerful TPU yet, delivering a 4.7x increase in peak compute performance per chip over the previous generation. The gain comes mainly from larger matrix multiply units (MXUs), a higher clock speed, and doubled high-bandwidth memory (HBM) capacity.
In the official announcement blog post, Amin Vahdat writes that TPUs already power applications such as autonomous vehicles, robotics, drug discovery, and generative AI for businesses, and are used to train the Gemini models.
Trillium also packs a 67% improvement in energy efficiency over its predecessor. That makes it a more sustainable option for running large-scale AI models in a world competing for data center resources (even Meta and Microsoft are working on their own AI chips).
Trillium is designed for scalability and can be used in a variety of configurations. Individual chips can be combined into pods of up to 256 chips, and beyond the pod level, Trillium scales to hundreds of pods, connecting tens of thousands of chips into a building-scale supercomputer.
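The scaling arithmetic here is straightforward to sketch. Below is a minimal, illustrative Python snippet: the 256-chips-per-pod figure comes from the announcement, while the pod counts used are hypothetical examples, not deployment numbers Google has published.

```python
# Illustrative back-of-the-envelope math for multi-pod Trillium deployments.
# CHIPS_PER_POD comes from Google's announcement; pod counts are hypothetical.

CHIPS_PER_POD = 256  # maximum Trillium chips in a single pod


def total_chips(num_pods: int) -> int:
    """Total chips across a deployment of `num_pods` pods."""
    return num_pods * CHIPS_PER_POD


# Even a hypothetical 100-pod cluster reaches tens of thousands of chips:
print(total_chips(1))    # 256
print(total_chips(100))  # 25600
```

At hundreds of pods, the same arithmetic lands in the tens of thousands of chips that the announcement describes for a building-scale system.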
Trillium is expected to serve a wide range of AI workloads, including natural language processing, image recognition, and other machine learning tasks. It will also power Google’s next-generation AI agents.
Long story short, Google is all-in. These TPUs are already rolling out to enterprises, and only time will tell how they reshape a market that currently favors the chipmaking giant Nvidia.
This is one of a host of AI-related announcements the company made at its Google I/O 2024 developer conference. Check out all the highlights here.