The trouble with looking at past examples of new tech and automation is that those were all verticals - the displaced worker could move to a different, perhaps newly created, area of work left intact by the change.
Where AI will be different (when we get there - LLMs are not AGI) is that it is a general human-replacement technology, meaning there will be no place to run ... It may change the job landscape, but the new jobs (e.g. supervising AIs) will ALSO be done by AI.
I don't buy this "AGI by 2027" timeline though - LLMs and LLM-based agents are still missing so many basic capabilities compared to a human (e.g. the ability to learn continually and incrementally). RL, test-time compute (cf. tree search), and agentic applications seem to have given a temporary second wind to LLMs that were otherwise topping out in terms of capability, but IMO we are already seeing the limits of this too: superhuman math and coding ability (on smaller-scope tasks) does not translate into GENERAL intelligence, since it is not based on a general mechanism - it comes from vertical training in these areas (atypical of the general use case) where there is a clean reward signal for RL to work well.
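To make the "clean reward signal" point concrete, here is a toy sketch in Python (everything in it is hypothetical - sample_candidate just stands in for a stochastic LLM sampler) of why best-of-N test-time compute and RL work so well when answers are exactly checkable, and why that breaks down for open-ended tasks:

    import random

    def sample_candidate(problem):
        # Stand-in for a stochastic LLM: guesses an answer near the truth.
        return problem["answer"] + random.choice([-2, -1, 0, 0, 1, 2])

    def clean_reward(problem, candidate):
        # Math/code-style reward: an exact check, an unambiguous 0/1 signal.
        return 1.0 if candidate == problem["answer"] else 0.0

    def best_of_n(problem, n=32):
        # Test-time compute: spend more samples, let the verifier pick a winner.
        candidates = [sample_candidate(problem) for _ in range(n)]
        return max(candidates, key=lambda c: clean_reward(problem, c))

    problem = {"question": "17 * 24 = ?", "answer": 408}
    print(best_of_n(problem))  # almost always 408 given enough samples

    # For open-ended tasks ("write a persuasive memo", "plan my week") there is
    # no clean_reward(): scoring is subjective and noisy, so neither best-of-N
    # selection nor RL gets the crisp signal that makes math and coding tractable.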
It seems that this crazy "we're responsibly warning you that we're going to destroy the job market!" spiel is perhaps because these CEOs realize there is a limited window of opportunity to drive widespread AI adoption (and/or attract more investment) before the limitations become more obvious. Maybe they are just looking for an exit, or perhaps they are hoping that AI adoption will be sticky even if it proves to be a lot less capable than what they are promising.