Advanced Micro Devices (AMD) has revealed new details about its upcoming artificial intelligence chip, positioning it as a competitor to market leader Nvidia. However, AMD did not disclose any information about potential buyers.
Santa Clara, California-based AMD stated that the chip, set to be released in limited quantities in the third quarter and mass-produced in the fourth quarter, will boast 192 gigabytes of memory.
AMD CEO Lisa Su said this significant memory capacity could assist technology companies in managing the escalating costs of delivering services comparable to ChatGPT. Su discussed the chip’s capabilities during a keynote presentation in San Francisco, where she showcased an AI system on the MI300X chip writing a poem about the city.
“The more memory that you have, the larger the set of models” the chip can handle, Su explained. “We’ve seen in customer workloads that it runs much faster. We really do think it’s differentiating.”
However, unlike previous presentations where AMD highlighted major customers for new chips, the company refrained from disclosing who will adopt the MI300X or its smaller counterpart, the MI300A. No information was provided regarding the chip’s pricing or sales strategy.
Although AMD’s shares have doubled in value since the beginning of the year and reached a 16-month high earlier on Tuesday, they closed down 3.6 percent after the AI strategy presentation. Nvidia shares, by contrast, finished 3.9 percent higher at $410.22, making Nvidia the first chipmaker to achieve a market capitalization above $1 trillion, according to a Reuters report.
“The lack of a (large customer) saying they will use the MI300 A or X may have disappointed the Street. They want AMD to say they have replaced Nvidia in some design,” noted Kevin Krewell, principal analyst at TIRIAS Research.
Nvidia currently dominates the AI computing market with an estimated market share of 80 percent to 95 percent, and its shares have surged by 170 percent so far this year, solidifying its position. While Intel and several startups like Cerebras Systems and SambaNova Systems have competing products, Nvidia’s primary competition has emerged from the internal chip endeavors of Google (Alphabet Inc) and Amazon.com’s cloud unit, both of which rent their custom chips to external developers.
In addition to its AI market aspirations, AMD announced that it has begun shipping high volumes of a general-purpose central processor chip named “Bergamo” to companies such as Meta Platforms. Alexis Black Bjorlin, who oversees computing infrastructure at Meta, confirmed that the firm has adopted the Bergamo chip, which targets a different segment of AMD’s data center business, catering to cloud computing providers and other large chip buyers.
While investors sought news about AI, Nvidia’s lead in the sector rests not only on its chips but also on its extensive experience providing software tools to AI researchers. Nvidia has excelled at anticipating the needs of AI researchers when designing chips, accumulating more than a decade of expertise in the field.
During the presentation, AMD provided updates on its ROCm software, which competes with Nvidia’s CUDA software platform. Soumith Chintala, a Meta vice president involved in creating open-source software for artificial intelligence, mentioned working closely with AMD to simplify the transition for AI developers from the “single dominating vendor” of AI chips to other options like those offered by AMD. Chintala stated that switching platforms would require minimal effort in many cases.
However, analysts cautioned that even if sophisticated companies like Meta can achieve good performance from AMD chips, that does not guarantee broader traction among less technically adept buyers.
“People still aren’t convinced that AMD’s software solution is competitive with Nvidia’s, even if it is competitive on the hardware performance side,” remarked Anshel Sag, an analyst at Moor Insights & Strategy.