It's vaguely defined and the goalposts keep shifting. It's not a thing to be achieved; it's an abstract concept. We've already retired the Turing test as a valuable metric because people are easily fooled and have been fooled by machines for a while now, yet passing it hasn't been world-changing either.
I've yet to hear agreed-upon criteria for declaring whether or not AGI has been achieved. Until it's at least understood what AGI is and how to recognize it, how could it possibly be achieved?
I think OpenAI's definition ("outperforms humans at most economically valuable work") is a reasonably concrete one, even if it's arguable that it's not 'the one true form of AGI'. That is at least the "it will completely change almost everyone's lives" point.
(It's also one that they are pretty far from. Even if LLMs displace knowledge/office work, there's still all the actual physical work that humans do, which, while robotics is improving rapidly with VLMs and similar stuff, is still a large improvement in the AI and some breakthroughs in electronics and mechanical engineering away.)
It's overly strong in some ways (and too weak in a few), yes. Which is why I said it's not the "one true definition", but a concrete one which, if reached, would well and truly mean the world has changed.
I think a good threshold, and definition, is the point where all of the different reasonable criteria are met, and where saying "that's not AGI" becomes the unreasonable perspective.
> how could it possibly be achieved?
This doesn't matter, and doesn't match the history of innovation in the slightest. New things don't come from "this is how we will achieve this", otherwise they would already be known things. Progress comes from "we think this is the right way to go, let's try to prove it is", trying, then iterating on the result. That's the whole foundation of engineering and science.
This is scary because there have already been AI engineers saying and thinking LLMs are sentient, so what counts as "unreasonable" could itself be a mass false belief, fueled by hype. And if you ask a non-expert, they often think AI is vastly better than it really is, able to pull data out of thin air.
How is that scary, when we don’t have a good definition of sentience?
Do you think sentience is a binary concept or a spectrum? Is a gorilla more sentient than a dog? Are all humans sentient, or does it get somewhat fuzzy as you go down in IQ, eventually reaching brain death?
Is a multimodal model, hooked to a webcam and microphone, in a loop, more or less sentient than a gorilla?
There may not be a universally agreed-upon threshold for the minimum required for AGI, but there's certainly a point beyond which AGI has definitely been developed.
There are some thresholds past which I think it would be obvious that a machine has reached it.
Put the AI in a robot body, and if you can interact with it the same way you would interact with a person (i.e. you can teach it to make your bed, pull weeds in the garden, drive your car, etc.) and it can take what you teach it and continually build on that knowledge, then the AI is likely an instance of AGI.