AMD rolls out open-source OLMo LLM to compete with AI giants – Computerworld


Competitive performance and benchmark success

In internal benchmarks, AMD claimed, its OLMo models performed well against similarly sized open-source models such as TinyLlama-1.1B and OpenELM-1_1B in multi-task and general reasoning tests. In particular, performance on GSM8k tasks improved by more than 15%, a substantial gain the company attributed to its multi-phase supervised fine-tuning and Direct Preference Optimization (DPO).
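AMD has not published the training code behind these numbers, but the DPO technique it cites is well documented: the model is trained to prefer a "chosen" response over a "rejected" one relative to a frozen reference model. A minimal sketch of the per-pair loss (function name and inputs are illustrative, not AMD's implementation) might look like:

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair.

    Inputs are summed token log-probabilities of the chosen and rejected
    responses under the policy being trained and a frozen reference model.
    beta controls how far the policy may drift from the reference.
    """
    # Implicit reward: how much more (or less) the policy likes each
    # response than the reference model does.
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    # Loss is -log(sigmoid(logits)), written in a numerically stable form.
    if logits >= 0:
        return math.log1p(math.exp(-logits))
    return -logits + math.log1p(math.exp(logits))
```

With identical log-probabilities the loss sits at log 2 (no preference learned yet), and it falls as the policy assigns relatively more probability to the chosen response than the reference does.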

In multi-turn chat tests, AMD claimed, OLMo showed a 3.41% edge in AlpacaEval 2 Win Rate and a 0.97% gain in MT-Bench over its closest open-source competitors.

However, when looking at the broader LLM landscape, Nvidia’s GH200 Grace Hopper Superchip and H100 GPU remain leaders in LLM processing, particularly for large, multi-faceted AI workloads. Nvidia’s focus on innovations like its NVLink-C2C interconnect, which accelerates data transfer between its CPU and GPU, gives it an edge, providing a speed advantage for high-demand inference tasks such as recommendation systems.

