I think that for someone or something to be actually superintelligent like humans, it needs to be conscious first, and the whole underlying foundation of software AI is wrong, or at least not suitable to scale to the point of superintelligence. You won't see AI evolve the way we saw life evolve here on Earth.
Humans are bad at multitasking but computers excel at it; that's why machine/deep learning is so impressive. Computers thrive on parallel computing while humans, as I said, are terrible at it, and therefore Yann is right: LLMs are information-retrieval super beasts that are programmed to predict, not to make independent decisions or tinker with anything in particular.
The example he uses seems very apples-and-oranges to me. I remember a scene from Age of Ultron where, 30 seconds after Ultron becomes self-aware, he reasons that humanity must be destroyed. It's just a movie, but that is more closely aligned with the scenario people are actually worried about.
AI would eradicate humans for what? Just for lolz, or because it has some particular long-term plan in mind? I would rather say AI could screw up while trying to help humans; we have already seen that happen when badly designed or badly implemented AI algorithms go wrong.
AI safety is of existential importance to the human race. AI safety is used by market incumbents to create an artificial moat. Both things can be true.
Lots more discussion: https://news.ycombinator.com/item?id=40390831
https://news.ycombinator.com/item?id=40391382
And Jan's post, among others:
https://news.ycombinator.com/item?id=40391412