Below you’ll find a quote from Eric Topol’s X account, which he has taken from a recent Nature article on AI in medicine (paywalled):
GPT-5 represents a meaningful advance: fewer hallucinations, better reasoning benchmarks, and stronger rule-following in its best variant. However, it remains a probabilistic text generator, not a reasoning engine. Whether next-token prediction can support robust, generalizable reasoning is still debated. Until that question is resolved, the most insidious risk may be the hardest to detect: the illusion of understanding. In medicine and public health, in which decisions carry life-or-death stakes, that illusion can be as dangerous as outright error.
A reasonable argument, I guess. I wonder, though: isn’t [human] understanding (i.e. reasoning, generalizable or not) also based on next-token prediction? The difference is that our tokens aren’t necessarily limited to text. The real illusion may be that human reasoning is something magical.
What is outright dangerous is a badly performing system of next-token prediction. In my experience, such systems can be human as well as artificial.
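For readers who haven’t looked under the hood, here is a minimal sketch of what “next-token prediction” amounts to in practice. The generation loop is real; the vocabulary and probabilities are invented for illustration, standing in for the distributions a language model would learn from data:

```python
import random

# Toy stand-in for a language model: a hand-written table mapping a short
# context to a probability distribution over possible next tokens.
# All tokens and probabilities here are invented for illustration.
NEXT_TOKEN_PROBS = {
    ("the", "patient"): {"presents": 0.5, "is": 0.3, "denies": 0.2},
    ("patient", "presents"): {"with": 0.9, "today": 0.1},
    ("presents", "with"): {"fever": 0.4, "fatigue": 0.35, "dyspnea": 0.25},
}

def generate(context, max_new_tokens=3):
    tokens = list(context)
    for _ in range(max_new_tokens):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens[-2:]))
        if dist is None:  # no known continuation for this context
            break
        # Sample the next token in proportion to its probability:
        # the output is plausible text, not a verified conclusion.
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the", "patient"]))  # e.g. "the patient presents with fever"
```

The point of the sketch is simply that the output is sampled, not checked: whether the result counts as “understanding” depends entirely on how good the underlying distribution is, which is exactly where badly performing predictors, human or artificial, become dangerous.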

