I think it depends on how you define intelligence, but _I_ mostly agree with François Chollet's stance that intelligence is the ability to find novel solutions and to adapt to new challenges. He feels that memorisation is an important facet, but it is not enough for true intelligence, and that these systems excel at type 1 thinking but have huge gaps at type 2.
The alternative I'm considering is that it might just be a dataset problem: feeding these LLMs on words alone makes them lack a huge facet of embodied existence that is needed to get context.
An LLM has to do an accurate simulation of someone critically evaluating their statement in order to predict the next word.
If an LLM can predict the next word without doing a critical evaluation, then it raises the question of what the intelligent people are doing. They might not be doing a critical evaluation at all.
> If an LLM can predict the next word without doing a critical evaluation, then it raises the question of what the intelligent people are doing
Well, certainly: in the mind, ideas can be connected tentatively by affinity, and they become plausible hypotheses; but then, in the "intelligent" process, they are evaluated to see whether they are sound (truthful, useful, productive, etc.) or not.
Intelligent people perform critical evaluation; others just embrace immature ideas passing by. Some "think about it", some don't (they may be deficient in will or resources: lack of time, of instruments, of discipline, etc.).
The poster wrote that prediction of the next token seems like intelligence to him. The reply was that consideration of content is required. You are now stating that it is not proven that it will not happen. But the point was that prediction of the next token is not the intelligence sought, and if and when the intelligence sought does happen, that will be a new stage; the current stage we do not call intelligence.
I have some experience with LLMs, and they definitely do consider the question. They even seem to do simple logical inference.
They are not _good_ at it right now, and they are totally bad at making generalizations. But who says it's not just an artifact of the limited context?
Why? I think it absolutely can be intelligence.