Broadcom, a leading chip supplier, has unveiled Jericho3-AI, a networking chip for connecting the supercomputers used in AI applications.
The company says the Jericho3-AI fabric will let network operators keep pace with the growing demands of AI workloads.
This technology arrives at a time when global spending on AI is rapidly increasing, with IDC forecasts indicating that it will reach $154 billion in 2023 and over $300 billion by 2026.
Broadcom says the Jericho3-AI fabric delivers at least 10 percent shorter job completion times than alternative networking solutions on key AI benchmarks, a saving that compounds across every GPU held for the duration of a job and so directly lowers the cost of running AI workloads. The fabric offers 26 petabits per second of Ethernet bandwidth, four times that of the previous generation, while drawing 40 percent less power per gigabit.
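As a rough illustration of that compounding effect, the back-of-the-envelope sketch below (with a hypothetical cluster size, GPU-hour price, and baseline run time that are not Broadcom figures) shows how a 10 percent reduction in completion time flows straight into GPU-hour savings:

```python
# Back-of-the-envelope sketch; all inputs are illustrative assumptions.
GPUS = 8_000            # GPUs occupied by one training job (assumed)
GPU_HOUR_COST = 2.50    # dollars per GPU-hour (assumed)
BASELINE_HOURS = 240    # completion time on a baseline fabric (assumed)
SPEEDUP = 0.10          # 10% shorter job completion time (Broadcom's claim)

baseline_cost = GPUS * GPU_HOUR_COST * BASELINE_HOURS
improved_cost = baseline_cost * (1 - SPEEDUP)

print(f"Baseline job cost:             ${baseline_cost:,.0f}")
print(f"Cost at 10% faster completion: ${improved_cost:,.0f}")
print(f"Savings per job:               ${baseline_cost - improved_cost:,.0f}")
```

Because every GPU in the cluster sits on the clock until the job finishes, shaving 10 percent off the completion time trims 10 percent of the entire cluster's bill for that job.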
Systems such as OpenAI’s ChatGPT and Alphabet’s Bard require massive amounts of data to be trained, so the work is divided across thousands of graphics processing units (GPUs) that must communicate with one another at high speed.
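To give a sense of how much traffic that communication generates, the sketch below estimates the per-step gradient exchange for a plain data-parallel training run using a ring all-reduce; the model size, GPU count, and link speed are illustrative assumptions, not figures tied to ChatGPT, Bard, or Jericho3-AI:

```python
# Rough estimate of per-GPU gradient traffic in data-parallel training
# with a ring all-reduce. All inputs are illustrative assumptions.
PARAMS = 175e9          # model parameters (assumed, GPT-3-scale)
BYTES_PER_PARAM = 2     # fp16 gradients (assumed)
N_GPUS = 4_096          # GPUs participating in the all-reduce (assumed)
LINK_GBPS = 800         # usable network bandwidth per GPU, Gb/s (assumed)

grad_bytes = PARAMS * BYTES_PER_PARAM
# A ring all-reduce moves roughly 2 * (N - 1) / N of the gradient buffer
# through every GPU's network link on every training step.
per_gpu_bytes = 2 * (N_GPUS - 1) / N_GPUS * grad_bytes
seconds_on_wire = per_gpu_bytes * 8 / (LINK_GBPS * 1e9)

print(f"Gradient buffer per step: {grad_bytes / 1e9:.0f} GB")
print(f"Traffic per GPU per step: {per_gpu_bytes / 1e9:.0f} GB")
print(f"Wire time at {LINK_GBPS} Gb/s:   {seconds_on_wire:.1f} s")
```

Unless that exchange overlaps with computation, the network sits on the critical path of every training step, which is why fabric bandwidth and job completion time are so tightly linked.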
The Jericho3-AI chip is capable of connecting up to 32,000 GPUs, providing a competitive alternative to InfiniBand, a popular supercomputer networking technology dominated by Nvidia.
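The 32,000-GPU figure is consistent with the fabric's aggregate bandwidth if each GPU gets its own high-speed port; the 800 Gb/s per-port rate in the sketch below is an assumption for illustration, not a published Broadcom specification:

```python
# Sanity check: relating 26 Pb/s of aggregate Ethernet bandwidth to the
# number of GPU endpoints. The per-port speed is an assumption.
FABRIC_PBPS = 26        # aggregate fabric bandwidth, petabits per second
PORT_GBPS = 800         # assumed bandwidth dedicated to each GPU, Gb/s

endpoints = FABRIC_PBPS * 1e6 / PORT_GBPS
print(f"Endpoints at {PORT_GBPS} Gb/s each: {endpoints:,.0f}")
# -> 32,500, in line with the quoted figure of up to 32,000 GPUs.
```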
While Nvidia-Mellanox systems are among the fastest in the world, many companies are reluctant to purchase both their GPUs and networking gear from the same supplier.
Broadcom’s senior VP and GM of the core switching group, Ram Velaga, notes that Ethernet is a widely available option with many competing vendors, while InfiniBand is a single-source, proprietary solution.
According to Broadcom, the Jericho3-AI fabric provides the lowest total cost of ownership at the highest performance, thanks to features such as long-reach SerDes, distributed buffering, and advanced telemetry, all delivered over industry-standard Ethernet.