
> A prediction of the next token being better isnt intelligence

Why? I think it absolutely can be intelligence.



I think it depends on how you define intelligence, but _I_ mostly agree with Francois Collet's stance that intelligence is the ability to find novel solutions and to adapt to new challenges. He feels that memorisation is an important facet, but it is not enough for true intelligence, and that these systems excel at System 1 thinking but have huge gaps at System 2.

The alternative I'm considering is that it might just be a dataset problem: training these LLMs on words alone makes them lack a huge facet of embodied existence that is needed to get context.

I am a nobody though, so who knows....


I agree, LLMs are interesting to me only to the extent that they are doing more than memorisation.

They do seem to do generalisation, to at least some degree.

If it were literal memorisation, we already have internet search for that.


And who says that LLMs won't be able to adapt to new conditions?

Right now they are limited by the context, but that's probably a temporary limitation.


*Chollet


Intelligence implies a critical evaluation of the statement under examination, before stating it, on considerations over content.

("You think before you speak". That thinking of course does not stop at "sounding" proper - it has to be proper in content...)


An LLM has to do an accurate simulation of someone critically evaluating their statement in order to predict the next word.

If an LLM can predict the next word without doing a critical evaluation, then it raises the question of what the intelligent people are doing. They might not be doing a critical evaluation at all.


> If an LLM can predict the next word without doing a critical evaluation, then it raises the question of what the intelligent people are doing

Well certainly: in the mind, ideas can be connected tentatively by affinity, becoming hypotheses of plausible ideas, but then in the "intelligent" process they are evaluated to see whether they are sound (truthful, useful, productive etc.) or not.

Intelligent people perform critical evaluation, others just embrace immature ideas passing by. Some "think about it", some don't (they may be deficient in will or resources - lack of time, of instruments, of discipline etc.).


> Intelligence implies a critical evaluation of the statement under examination, before stating it, on considerations over content.

And who says LLMs are not able to do that (eventually)?


The poster wrote that prediction of the next token seems like intelligence to him. The reply was that consideration of content is required. You are now stating that it is not proven it will never happen. But the point was that prediction of the next token is not the intelligence sought, and if and when the intelligence sought does happen, that will be a new stage - the current stage we do not call intelligence.


I have some experience with LLMs, and they definitely do consider the question. They even seem to do simple logical inference.

They are not _good_ at it right now, and they are totally bad at making generalizations. But who says it's not just an artifact of the limited context?



