
OpenAI just launched its smartest AI yet that can think with images — here’s how to try it

OpenAI just released two updated AI models — o3 and o4-mini — for ChatGPT Plus, Pro and Team users. Billed as the company's smartest models yet, these two new, bigger and better brains can tackle more advanced queries, make sense of even the blurriest images, and solve problems in ways earlier models couldn't.

This release comes just a few days after OpenAI announced that ChatGPT is getting a major upgrade to its memory features, aimed at making conversations even more personal, seamless and context-aware.

With ChatGPT retiring GPT-4 at the end of this month, the release of these new models underscores OpenAI's broader push to make ChatGPT feel less like a one-off assistant and more like a long-term, adaptable tool that evolves with its users.

More advanced multimodal capabilities

(Image credit: ChatGPT)

These models are the most advanced yet, capable of interpreting both text and images, including lower-quality visuals such as handwritten notes and blurry sketches. Users can upload diagrams or whiteboard photos, and the models will incorporate them into their responses.
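For developers, the same image-understanding capability is reachable through OpenAI's chat API, where an image is attached to a message alongside the text prompt. The sketch below builds such a message from raw image bytes using the API's published base64 data-URL convention; the model name "o4-mini" in the commented-out call is an assumption, and the helper itself is illustrative, not part of any SDK.

```python
# Hypothetical sketch: preparing a whiteboard photo for one of the new models
# via OpenAI's chat API message format. The base64 data-URL pattern follows
# OpenAI's documented convention for inline image inputs.
import base64


def build_image_message(prompt: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build a chat message that pairs a text prompt with an inline image."""
    data_url = f"data:{mime};base64," + base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }


# Usage (requires an API key; the call itself is sketched, not executed here):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="o4-mini",  # assumed model identifier
#     messages=[build_image_message("What does this whiteboard say?", photo_bytes)],
# )
# print(resp.choices[0].message.content)
```

Because the image travels inline as a data URL, no separate upload step is needed — the photo and the question arrive in a single request.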

The models also support real-time image manipulation, such as rotating or zooming, as part of the problem-solving process.


(Image credit: Shutterstock)

For the first time, the models can independently use all of ChatGPT’s tools, including the browser, Python code interpreter, image generation and image analysis. This means the AI can decide which tools to use based on the task given, potentially making it more effective for research, coding, and visual content creation.
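The developer-facing analogue of this autonomous tool use is the chat API's function-calling format: the caller advertises tools, and the model decides when to invoke them. The sketch below describes a code-execution tool in that format; "run_python" is an invented example for illustration, not one of ChatGPT's built-in tools.

```python
# Hypothetical sketch: advertising a tool to the model using OpenAI's
# function-calling schema. The model may then respond with a tool call,
# which the application executes before returning the result.
def python_tool_spec() -> dict:
    """Describe a code-execution tool in the chat API's function-calling format."""
    return {
        "type": "function",
        "function": {
            "name": "run_python",  # invented tool name for illustration
            "description": "Execute a Python snippet and return its stdout.",
            "parameters": {
                "type": "object",
                "properties": {
                    "code": {"type": "string", "description": "Python source to run"},
                },
                "required": ["code"],
            },
        },
    }


# A request would pass tools=[python_tool_spec()] (call sketched, not executed):
# resp = client.chat.completions.create(
#     model="o3",  # assumed model identifier
#     messages=[{"role": "user", "content": "Plot a sine wave"}],
#     tools=[python_tool_spec()],
# )
```

The key point mirrors the article: the caller only declares what tools exist; choosing which one fits the task is left to the model.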

