
> Any discussion about AGI requires a written definition of the term to have a reasonable discussion.

I agree, this is the hardest thing to pin down in any discussion. What version of AGI are we even talking about?

Here's a definition / explanation of AGI from the early days of DeepMind (from the movie "The Thinking Game"):

Quote from Shane Legg: "Our mission was to build an AGI - an artificial general intelligence, and so that means that we need a system which is general - it doesn't learn to do one specific thing. That's a really key part of human intelligence: learning to do many, many things."

Quote from Hassabis: "So, what is our mission? We summarise it as <Build the world's first general learning machine>. So we always stress the words general and learning here - those are the key things."

And the key slide (that I think cements the difference between what AGI stood for then, vs. now):

AI - one task vs. AGI - many tasks, at human-level intelligence.

----

Now, if we go by this definition, which is pretty specific and clear, I think we've already achieved this. We already have systems that have "generally" learned stuff and can do "many tasks" at "human-level intelligence". Again, notice the emphasis on "general" and "learning". We have a learning machine that takes in vast amounts of tokens (text, multimodal, even bytes at the end of the day) and "learns" to "do" many things. And notice it's many tasks, not all tasks. I think this is QED at this point.

But due to the old problem of "AI is everything that hasn't been done yet", the constant goalpost-moving, and the sheer volume of writing on this topic, the waters are muddier today, and people argue for and emphasise different things in the AGI field.

> Fortunately there is a lot of practical utility without AGI

Yeah, completely agree. I'm with Simon's recent article on this one. It doesn't even matter at this point whether we reach AGI or not, or whose definition we use. I get a lot of value today from these systems. The debates are moot from my point of view.






> Now, if we go by this definition, which is pretty specific and clear...

I was going to say: no, you've defined "general" pretty well, but you didn't define "intelligence" at all. On second thought, though, I guess you did - learning.

I might amend that slightly: it should be learning to do. I don't care if it can learn the vocabulary of, say, chemistry. Can it learn to solve chemistry problems?

The remaining area of fuzziness is hidden in "at human level". At what human level? I took a year of college chemistry. Can it do chemistry at that level? How about at the level of someone with a BS in chemistry? A PhD? Those are all "human" levels, but they are very different.

If it can do, say, all college subjects at undergrad level... I guess that's the benchmark for "a well-rounded human".

> I think we've already achieved this.

I want to think about it some more before I definitely agree, but you've made the best case that I have seen.

The flaw I think I see is that, from a well-rounded education, we expect a human to be able to specialize, to become an expert in something. I'm not sure LLMs are quite there yet. They're closer than I was thinking 10 minutes ago, though.



