Are reasoning models fundamentally flawed?

A report from Apple has cast significant doubts on the efficacy of reasoning models, going as far as to suggest that when a problem is too complex, they simply give up.
AI reasoning models have emerged in the past year as a beacon of hope for large language models (LLMs), with AI developers such as OpenAI, Google, and Anthropic selling them as the go-to solution for solving the most complex business problems.
However, a new research paper by Apple has cast significant doubts on the efficacy of reasoning models, going as far as to suggest that when a problem is too complex, they simply give up. What’s going on here? And does it mean reasoning models are fundamentally flawed?
In this episode, Rory Bathgate speaks to ITPro’s news and analysis editor Ross Kelly to explain some of the report’s key findings and what it means for the future of AI development.
Highlights
“It sounds a bit obvious, but at the same time, the way that this has been framed by a lot of providers makes it seem like they’re the be-all and end-all. But once you reach a level of complexity, yeah, you start to encounter some serious problems.”
“What’s interesting for me, right off the bat, is that what Apple is showing is actually the opposite of what these are good at: that they are maybe more performant, maybe they’re slightly better at solving complex problems up to a point, but that if you actually try and give them really complex problems, they are completely ineffective.”
“Apple is claiming that OpenAI hasn’t made a model that can think, to which my reaction was, did you think OpenAI had made a model that can think?”