AMD’s Advancing AI conference was one of the shortest flagship events I’ve been to, certainly this year and possibly ever, but it still managed to be fairly action-packed.
The one-day event was centered on CEO Lisa Su’s two-hour keynote presentation, in which she was joined on stage by numerous special guests – including a surprise appearance from OpenAI CEO Sam Altman.
I’ll return to these guests in a moment, but first let’s dig into the news announcements. The biggest was Helios, a double-wide, rack-scale offering from AMD that will be available next year, and which Su described as a “game changer” for the company.
To say there was a lot of pointing to graphs where AMD’s devices either match or exceed Nvidia’s comparable offerings would be an extraordinary understatement. In Su’s keynote we got to see a slide claiming that the Instinct MI355X chip – also announced at the event – is 1.2x faster at DeepSeek R1 throughput and 1.3x faster at Llama 3.1 405B throughput than Nvidia’s B200, and offers the same throughput as the Nvidia GB200 on Llama 3.1 405B. This was followed by a claim that the MI355X offers ‘up to’ 40% more tokens per dollar than the B200.
Similar slides followed, showing parity or slight gains on LLM training and fine-tuning, while in the press conference we were treated to further slides claiming greater memory capacity, memory bandwidth, and peak performance than Nvidia’s GB200 or B200.
As we’re not able to independently verify these claims, I’m not going to repeat them all verbatim, but needless to say there was a strong “better than Nvidia, actually” theme.
In many ways this is to be expected. Like it or not, AMD and Intel are both playing catch-up with Nvidia, a company led by one of the most charismatic CEOs in the industry.
As Forrest Norrod, EVP and GM of AMD’s data center solutions business unit, told a press round table: “Nvidia has done a great job of convincing the industry that we’re living in the future, and so that whatever… they announce is actually here today, so we have to get a little bit more aggressive.”
Norrod also admitted that “Nvidia is the de facto standard right now,” but added that Helios will be a “good solution” for hyperscalers, tier-two cloud providers, and neoclouds alike. He was also at pains to point out that Helios is “not the only thing [AMD is] doing”, however, and while he didn’t expand on this, earlier slides had shown plans for a future rack-scale architecture going into 2027, as well as an annual chip release plan stretching to the Instinct MI500X range. Watch this space, I suppose.
Embracing the open ecosystem
The final key talking point for Su and other members of the AMD leadership was the importance of openness. Su said during her keynote that AMD is “investing heavily in an open, developer-first ecosystem,” adding that the company is “really supporting every major framework, every library, and every model to bring the industry together in open standards so that everyone can contribute to AI innovation”.
Su drew a parallel between the self-declared open approach AMD is taking and Linux surpassing Unix as the data center operating system of choice “when global collaboration was unlocked”. She also pointed to the open-source Android operating system as one of the drivers of increased smartphone ownership. Indeed, the latest figures from Statista at the time of writing show Android holding a 72.7% market share to iOS’s 26.9%, despite the iPhone being the single most popular device.
As mentioned above, many of the graphs in Su’s presentation leaned on the throughput performance of Llama – Meta’s ostensibly open-source AI platform – and Vamsi Boppana, SVP of AI at AMD, pointed out that 1.8 million Hugging Face models now run on the company’s ROCm platform.
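For a sense of what “running on ROCm” means in practice, here’s a minimal sketch of my own (not something AMD showed): ROCm builds of PyTorch expose AMD GPUs through the familiar torch.cuda API, so ordinary Hugging Face Transformers code runs without modification. The “gpt2” model ID below is just a small, convenient example.

```python
# Minimal sketch (my illustration, not from AMD's materials): on a ROCm
# build of PyTorch, AMD GPUs are exposed through the torch.cuda API,
# so standard Hugging Face Transformers code runs unmodified.
import torch
from transformers import pipeline

# On a supported AMD GPU with ROCm, torch.cuda.is_available() returns True.
device = 0 if torch.cuda.is_available() else -1  # -1 falls back to CPU

# "gpt2" is just a small example; any Hugging Face text-generation model
# ID could be substituted here.
generator = pipeline("text-generation", model="gpt2", device=device)
print(generator("Open ecosystems matter because", max_new_tokens=20)[0]["generated_text"])
```

That API compatibility, rather than any single benchmark number, is the practical substance of the “open ecosystem” pitch.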
What’s next for AMD
It’s hard to say when the next AMD Advancing AI event will take place – companies of this size typically run their conferences annually, but only nine months passed between the 2024 event and this most recent one. Nevertheless, the company has given a clear indication of its plans for the next few years at least.
Su committed AMD to an annual release cycle for its Instinct GPU range, a cadence that is unlikely to be broken. We can expect Helios to ship in 2026 and a second rack-scale product to follow in 2027. What the full specs of all those upcoming products will be – and how much interest there is in Helios in particular – remains to be seen. One thing is certain, though: Nvidia may be the “de facto standard”, but AMD is spoiling to take its crown.