OpenAI Launches o3-pro, Its Most Capable Reasoning Model Yet

On Wednesday, OpenAI released o3-pro, its most capable reasoning model on ChatGPT. In April 2025, the company released the standalone o3 model along with o4-mini. o3-pro uses the same underlying o3 model, but runs it in a high-compute mode that applies more computing power and extended thinking time to solve harder problems.

OpenAI says o3-pro offers better performance than o3 in many areas, especially science, education, programming, data analysis, and writing. In head-to-head comparisons, expert testers consistently preferred o3-pro’s output over o3’s.

As a reasoning model, o3-pro excels at math, science, and coding. On the AIME 2024 benchmark, o3-pro scored 93%; on GPQA Diamond, it achieved 84%; and on Codeforces, it reached a rating of 2,748. In all of these benchmarks, o3-pro outperformed the underlying o3 model, thanks to the additional test-time compute.

What is interesting is that o3-pro has access to multiple tools inside ChatGPT, such as web browsing, file analysis, visual analysis, the Python interpreter, memory, and more. o3-pro replaces o1-pro on ChatGPT, and it’s rolling out to ChatGPT Pro and Team users. Sadly, ChatGPT Plus users can’t access this high-compute mode.

Enterprise and Edu users will get access to o3-pro next week. Besides ChatGPT, o3-pro is also available in the API, and its pricing is pretty surprising: $20 per 1 million input tokens and $80 per 1 million output tokens. The model supports a context window of up to 200,000 tokens, and its knowledge cutoff date is June 1, 2024.
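To put that pricing in perspective, here is a minimal Python sketch that estimates the cost of a single o3-pro API request from its token counts, using the $20 / $80 per 1 million token rates above. The `estimate_cost` helper and the example token counts are purely illustrative, not part of any official SDK.

```python
# Illustrative cost estimate for one o3-pro API request, based on the
# published rates: $20 per 1M input tokens and $80 per 1M output tokens.

INPUT_RATE_PER_TOKEN = 20 / 1_000_000    # USD per input token
OUTPUT_RATE_PER_TOKEN = 80 / 1_000_000   # USD per output token


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_RATE_PER_TOKEN
            + output_tokens * OUTPUT_RATE_PER_TOKEN)


if __name__ == "__main__":
    # Example: a 10,000-token prompt that produces a 2,000-token answer.
    print(f"${estimate_cost(10_000, 2_000):.2f}")  # -> $0.36
```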

Arjun Sha

Passionate about Windows, ChromeOS, Android, and security and privacy issues. Has a penchant for solving everyday computing problems.


