SK Hynix, the South Korea-based semiconductor manufacturer, has announced a significant advance in AI memory technology.
The company has developed HBM3E, the latest iteration of its High Bandwidth Memory (HBM) series. HBM3E sets a new standard for AI applications and data processing, reinforcing SK Hynix's leadership in the AI memory market.
HBM3E is the fifth generation of the HBM family, following HBM, HBM2, HBM2E, and HBM3. Like its predecessors, it vertically stacks and interconnects multiple DRAM chips to reach data-processing speeds far beyond those of conventional DRAM. And HBM3E does not just raise the bar on speed: it also improves on the previous generation in capacity, heat dissipation, and ease of integration.
The headline figure is processing speed: HBM3E delivers data throughput of up to 1.15 terabytes per second. To put that in perspective, the memory can move the equivalent of more than 230 Full-HD movies of 5GB each in a single second, a level of bandwidth aimed squarely at demanding AI workloads.
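That comparison is simple arithmetic on the figures SK Hynix cites, and it checks out:

$$230 \times 5\,\text{GB} = 1{,}150\,\text{GB} \approx 1.15\,\text{TB}$$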
HBM3E also improves heat management. SK Hynix applies its Advanced Mass Reflow Molded Underfill (MR-MUF) process to the new memory, which the company says yields a 10 percent improvement in heat dissipation, helping the chips stay reliable under sustained AI workloads. HBM3E is also backward compatible: it can be adopted in systems designed for HBM3 without design or structural modifications.
Industry partners have welcomed the announcement. Ian Buck, Vice President of Hyperscale and HPC Computing at NVIDIA, commended SK Hynix's track record in high-performance memory solutions and said he looked forward to continued collaboration in AI computing.
According to research firm TrendForce, NVIDIA holds the largest share of the market for AI server accelerator chips. Its H100/H800 GPUs are priced between $20,000 and $25,000 per unit, and with an AI server's recommended eight-card configuration, the total cost of ownership adds up quickly. So while cloud service providers (CSPs) will continue to source server GPUs from NVIDIA or AMD, they are also planning to develop their own AI accelerator chips.
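A rough back-of-the-envelope estimate using the quoted prices shows the scale (GPUs alone, before chassis, networking, and power are counted):

$$8 \times \$20{,}000 = \$160{,}000 \qquad 8 \times \$25{,}000 = \$200{,}000$$

In other words, the accelerators in a single eight-card server run $160,000 to $200,000 before any other hardware.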
Sungsoo Ryu, Head of DRAM Product Planning at SK Hynix, emphasized the company’s commitment to market leadership through continuous innovation. Ryu highlighted the significance of HBM3E in SK Hynix’s expanding HBM product lineup, particularly in light of the rapid advancements in AI technology. He also noted that the introduction of HBM3E would drive business growth and accelerate market penetration.
With mass production of HBM3E slated for the first half of next year, SK Hynix looks set to hold its lead in AI memory solutions. As industries come to depend on ever more powerful AI applications, HBM3E positions the company to help shape the future of AI-driven technology.