
High bandwidth memory to grow 60% in 2023: TrendForce

High Bandwidth Memory (HBM) is emerging as the preferred solution for overcoming the memory transfer speed constraints that limit DDR SDRAM in high-speed computation.
HBM stands out for its remarkable transmission efficiency and plays a crucial role in enabling core computational components to operate at their full potential.

Leading AI server GPUs have set a new industry benchmark by predominantly adopting HBM. The global demand for HBM is expected to grow by nearly 60 percent annually in 2023, reaching a staggering 290 million GB, with an additional 30 percent growth anticipated in 2024, according to TrendForce.
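As a quick back-of-the-envelope check on those figures, the short Python sketch below derives the 2022 baseline implied by 60 percent growth and projects the 2024 volume from the anticipated 30 percent growth. The 290 million GB figure and both growth rates are TrendForce's; the rest is simple arithmetic.

    # Back-of-the-envelope projection from TrendForce's figures:
    # 290 million GB of demand in 2023, +60% YoY in 2023, +30% in 2024.
    demand_2023 = 290.0   # million GB
    growth_2023 = 0.60
    growth_2024 = 0.30

    implied_2022 = demand_2023 / (1 + growth_2023)    # baseline implied by 60% growth
    projected_2024 = demand_2023 * (1 + growth_2024)  # 2024 volume at 30% growth

    print(f"Implied 2022 demand:   {implied_2022:.0f} million GB")
    print(f"Projected 2024 demand: {projected_2024:.0f} million GB")
    # -> roughly 181 million GB for 2022 and 377 million GB for 2024
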

Considering TrendForce’s forecast for 2025, which assumes five large-scale AIGC products comparable to ChatGPT, 25 mid-size AIGC products comparable to Midjourney, and 80 small AIGC products, the minimum computing resources required globally are estimated at 145,600 to 233,700 Nvidia A100 GPUs.

The advent of technologies such as supercomputers, 8K video streaming, and AR/VR is expected to generate a simultaneous surge in workload on cloud computing systems, thereby necessitating high-speed computing capabilities. In this context, HBM undeniably stands as the superior solution for constructing high-speed computing platforms due to its superior bandwidth and lower energy consumption when compared to DDR SDRAM.

The gap is evident when comparing DDR4 SDRAM and DDR5 SDRAM, released in 2014 and 2020 respectively: despite the six-year interval, their bandwidth differs by only a factor of two. Moreover, whether DDR5 or the future DDR6 is employed, the pursuit of higher transmission performance inevitably increases power consumption, which can negatively impact system performance.

By way of illustration, HBM3 exhibits a bandwidth 15 times greater than DDR5 and can be further enhanced by incorporating more stacked chips. Moreover, HBM can replace a portion of GDDR SDRAM or DDR SDRAM, effectively managing power consumption.
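For readers who want to see where multiples like these come from, here is a minimal sketch covering both the two-fold DDR4-to-DDR5 gap and the HBM3-to-DDR5 comparison. The per-pin data rates and bus widths below (DDR4-3200 and DDR5-6400 on a 64-bit channel, one HBM3 stack at 6.4 Gb/s per pin across a 1024-bit interface) are commonly cited values chosen for illustration, not TrendForce's exact inputs; the resulting multiple varies with the speed grades compared.

    # Peak bandwidth = per-pin data rate (Gb/s) x bus width (bits) / 8 -> GB/s.
    # Speed grades below are illustrative assumptions, not TrendForce's inputs.
    def peak_bw_gb_s(pin_rate_gbit_s: float, bus_width_bits: int) -> float:
        """Peak bandwidth in GB/s for a memory interface."""
        return pin_rate_gbit_s * bus_width_bits / 8

    ddr4 = peak_bw_gb_s(3.2, 64)     # DDR4-3200, one 64-bit channel -> 25.6 GB/s
    ddr5 = peak_bw_gb_s(6.4, 64)     # DDR5-6400, one 64-bit channel -> 51.2 GB/s
    hbm3 = peak_bw_gb_s(6.4, 1024)   # one HBM3 stack, 1024-bit bus -> 819.2 GB/s

    print(f"DDR5 vs DDR4: {ddr5 / ddr4:.1f}x")  # ~2x, the two-fold gap above
    print(f"HBM3 vs DDR5: {hbm3 / ddr5:.1f}x")  # ~16x per stack at these grades

Note that these ratios are per HBM stack; stacking additional dies, as the paragraph above describes, raises aggregate bandwidth further.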

TrendForce concludes that the current surge in demand is primarily driven by AI servers equipped with Nvidia A100 and H100 GPUs and AMD MI300 accelerators, as well as by major CSPs such as Google and AWS, which are developing their own ASICs.

It is estimated that the shipment volume of AI servers, including those equipped with GPUs, FPGAs, and ASICs, will reach nearly 1.2 million units in 2023, indicating an annual growth rate of almost 38 percent. Additionally, TrendForce anticipates a concurrent surge in the shipment volume of AI chips, with growth potentially exceeding 50 percent.
