In his recent "Intelligence Age" post, Altman says superintelligence may be only a few thousand days out. This might, of course, be wrong, but skyrocketing demand for chips is a straightforward consequence of taking it seriously.
This is actually quite clever phrasing. "A few thousand days" is roughly a decade or more, assuming normal usage of 'few' (i.e. usually a number between 3 and 6 inclusive).
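For the pedantic, a quick sanity check on that arithmetic (the 3–6 range for 'few' is just the usage convention above, not anything Altman said):

```python
# Back-of-the-envelope: how many years is "a few thousand days"?
# Assumes "few" = 3 to 6, per the convention above.
DAYS_PER_YEAR = 365.25  # average, accounting for leap years

for thousands in range(3, 7):
    years = thousands * 1000 / DAYS_PER_YEAR
    print(f"{thousands},000 days ≈ {years:.1f} years")
```

Which gives roughly 8.2 to 16.4 years, so the phrase is compatible with anything from "under a decade" to "a decade and a half".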
Now, if you, as a tech company, say "X is ten years away", anyone who has been around for a while will disregard your claim entirely, because forward-looking statements in that range by tech companies are _always_ wrong; it's pretty much a cliché. But phrasing it as "a few thousand days" may get past some people's defences.
The mistake isn't thinking 'scaling is the solution to AGI'.
And the mistake isn't thinking more generally about 'the solution to AGI'.
The mistake is thinking about 'AGI'.
There will never be an artificial general intelligence. There will never be artificial intelligence, full stop.
It's a fun concept in science fiction (with earlier parallels in fantasy literature and folk tales). It's not, and never will be, reality. If you think it can be, then either you are suffering from 'science fiction brain', or you are a fraud (Sam Altman), or you are both (possibly Sam Altman again).
Demand for compute will skyrocket given AGI even if AGI turns out to be relatively compute-efficient. The ability to translate compute directly into humanlike intelligence simply makes compute much more valuable.
Since AGI isn't here yet, the eventual implementation that breaks through might be based on a different technology; if it turns out to need quantum computing, for example, the money invested in building out current fabs might turn out to be useless.
Input and output, given that they must connect with the physical world, seem to me to be the likely limiting resource, unless you think isolated virtual worlds will have value unto themselves.
An AGI can presumably control a robot at least as well as a human operator can. The hardware side of robotics is already good enough that we could leverage this to rapidly increase industrial output. Including, of course, producing more AGI-controlled robots. So it may well be the case that robot production, rather than chip production, becomes the bottleneck on output growth, but such growth will still be extremely fast and will still drive demand for far more computing capacity than we're producing today.
And I suppose you are assuming that the robots will mine and refine the metal ore themselves, and then also dig the foundations for the factories that house their manufacturing?