
When billion-dollar AIs break down over puzzles a child can do, it’s time to rethink the hype
A recently published research paper casts doubt on claims about the capabilities of a new generation of artificial intelligence, or AI, technology. The paper, released by Apple, warns that large reasoning models, or LRMs, are not proving capable of reasoning reliably. The findings reinforce claims previously made by Subbarao Kambhampati, a professor in the School of Computing and Augmented Intelligence, part of the Fulton Schools. He has observed that people tend to assume these AI systems follow something resembling the steps a human might take to solve a challenging problem, but his work shows that these models exhibit the same kinds of failures Apple documented in its paper.
See related articles in which Kambhampati is quoted:
Apple Is Pushing AI Into More of Its Products—but Still Lacks a State-of-the-Art Model (WIRED, June 9)
Meta Is Building a Superintelligence Lab. What Is That? (The New York Times, June 13)