Foxconn plans to build the largest production facility for Nvidia’s GB200 chips. The move comes as the company works to meet overwhelming demand for Nvidia’s Blackwell AI platform, driven by the rapid expansion of artificial intelligence technology.
Benjamin Ting, Senior Vice President of Foxconn’s Cloud Enterprise Solutions Business Group, revealed the development at the company’s annual tech day in Taipei. Although he declined to disclose the location of the new facility, Foxconn emphasized that the project would position it as a major player in Nvidia’s AI chip supply chain.
Nvidia’s Blackwell platform, which powers high-performance AI applications, has become one of the most sought-after technologies in the booming AI industry. Foxconn’s involvement in manufacturing these chips highlights the company’s growing role beyond its core iPhone assembly operations.
NVIDIA is set to revamp its AI and data center product lineup with the introduction of the Blackwell platform, which is expected to ramp up production in 2025. With a die size roughly double that of its predecessor, Hopper, the new platform is anticipated to deliver substantial performance improvements and cater to growing demand for AI computing power across industries.
NVIDIA’s data center business is already on a record-breaking trajectory, with revenue more than doubling in the second quarter of its fiscal 2025 to reach $30 billion. The surge is driven by strong demand for NVIDIA’s Hopper-based GPUs, such as the H100 and H200, with the latter set to take over as the flagship GPU starting in the third quarter of 2024. The Blackwell platform is expected to follow, becoming a key pillar of NVIDIA’s AI and data center offerings.
TrendForce reports that TSMC, NVIDIA’s key supplier of CoWoS packaging solutions, is preparing for this shift by doubling its monthly capacity to around 70,000–80,000 units. NVIDIA is expected to take up more than half of that capacity as the Blackwell platform reaches mass production.
Meanwhile, the Blackwell chips will fully adopt HBM3e memory, a technology already used in the H200 GPU. Leading suppliers Micron, SK hynix, and Samsung are gearing up for the new memory demands, with SK hynix and Micron already in full production of HBM3e for the H200 and Blackwell chips. This positions the platform to meet the escalating need for high-performance AI servers, especially as NVIDIA continues to drive advancements in large language models (LLMs), search engines, and chatbots.
With the Blackwell platform poised to transform AI infrastructure, NVIDIA’s dominance in the data center market looks set to continue well into 2025 and beyond.
Foxconn’s move to build this large-scale production facility follows a trend of tech companies capitalizing on the surge in AI demand, particularly for servers and data processing equipment. Nvidia’s Vice President for AI and Robotics, Deepu Talla, appeared at the event and further highlighted the importance of the collaboration, although Nvidia CEO Jensen Huang was notably absent.
The new facility is expected to help alleviate some of the supply constraints currently affecting the AI chip market and bolster Foxconn’s standing in the industry as a key supplier of AI infrastructure.
Baburajan Kizhakedath