
‘Twas the night before GPT-5, when all through the social-media-sphere,
Not a creature was posting, not even @paulg nor @eshear.

Next morning’s posts were prepped and scheduled with care,
In hopes that AGI soon would appear …



Unless someone figures out how to make these models a million(?) times more efficient, or feeds them a million times more energy, I don’t see how AGI would even be a twinkle in the eye of the LLM strategies we have now.


Hey man don’t bring that negativity around here. You’re killing the vibe. Remember we’re now in a post-facts timeline!


To kill the vibe further, AGI might kill us all, so I hope it never arrives.


Based on our behavior, personally, I think we’d deserve it.


If you've done something deserving of death, you're welcome to turn yourself in.


Can I opt out of that cohort?


> Unless someone figures out how to make these models a million(?) times more efficient, or feeds them a million times more energy, I don’t see how AGI would even be a twinkle in the eye of the LLM strategies we have now.

A fair argument. So what is left? At the risk of sounding snarky, "new" strategies. Hype is annoying, yes, but I wouldn't bet against mathematics, physics, and engineering getting to silicon-based AGI, assuming a sufficiently supportive environment. I don't currently see any physics-based blockers; the laws of the universe permit AGI and more, I think. The human brain is a powerful demonstration of what is possible.

Factoring in business, economics, and culture makes forecasting much harder. Nevertheless, the incentives are there. As long as there is hope, some people will keep trying.


I agree with everything you said. It’s a worthy pursuit. I would love to see breakthroughs, but even incremental progress is great. If we’re near a limit we haven’t understood yet, I won’t be shocked. At the same time, if I hear about this replacing programmers again…


Again, putting aside hype, assuming machine intelligence increases over time (it doesn't have to be steady, linear, exponential, or follow any particular curve), the economic value of a human being decreases.

What kinds of scenarios emerge as corporations and governments build more advanced AI systems? Consumer preferences will matter to some degree, in the aggregate, but this may not resemble the forms of democratic influence we might prefer.

At some point, even a massive popular backlash might not be enough to change the direction very much. A "machine takeover" is not necessary -- the power provided by intelligence is sufficiently corrupting on its own. This is a common thread through history -- new technologies often shift power balances. The rapid rise of machine intelligence, where that intelligence can be copied from one machine to another, is sufficiently different from other historical events that we should think very hard about just how f-ing weird it could get.

To what degree will the dominant human forces use AI to improve the human condition? One lesson from history is that power corrupts. If one group gets a significant lead over the others, the asymmetry could be highly destabilizing.

It gets worse. If the machines have unaligned goals -- and many experts think this may be unavoidable (though we must keep trying to solve the alignment problem) -- what happens as they get more capable? Can we control them? Contain them?

But under what conditions do humans continue to call the shots? Under what conditions might the machines out-think, out-compete, or even out-innovate their human designers?

This isn't science fiction: AI systems have already been shown to try to cheat and "get out of their box". It only takes one sufficiently big mistake. Humans tend to respond slowly to warning shots. We might get a few if we're lucky, and we might get our act together in time, but I wouldn't assume it. We had better get our shit together before something like this happens.

I encourage everyone to take a few hours and think deeply through various scenarios (as if you were building a computer security attack tree) and assign probability ranges to their occurrence. This might open your eyes a bit.
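
As a minimal sketch of that exercise (Python, using the standard attack-tree combination rules: AND nodes multiply probabilities, OR nodes take one minus the product of the complements, which assumes the leaf events are independent). Every node name and number below is a made-up placeholder, not an actual estimate:

    from dataclasses import dataclass, field

    def prod(xs):
        out = 1.0
        for x in xs:
            out *= x
        return out

    @dataclass
    class Node:
        name: str
        kind: str = "leaf"            # "leaf", "and", or "or"
        prob: tuple = (0.0, 0.0)      # (low, high) estimate for leaves
        children: list = field(default_factory=list)

        def estimate(self):
            # Combine child ranges, assuming independent events.
            if self.kind == "leaf":
                return self.prob
            lows, highs = zip(*(c.estimate() for c in self.children))
            if self.kind == "and":    # every step must happen
                return prod(lows), prod(highs)
            # "or": at least one branch happens
            return (1 - prod(1 - p for p in lows),
                    1 - prod(1 - p for p in highs))

    # Hypothetical scenario tree -- placeholder names and numbers to argue about.
    tree = Node("unsafe AI causes major harm", "and", children=[
        Node("sufficiently capable system is deployed", prob=(0.2, 0.6)),
        Node("it evades oversight", "or", children=[
            Node("misaligned goals go undetected", prob=(0.1, 0.4)),
            Node("deliberate misuse", prob=(0.05, 0.3)),
        ]),
    ])

    low, high = tree.estimate()
    print(f"combined probability range: {low:.2f} to {high:.2f}")

For this toy tree it prints a range of roughly 0.03 to 0.35. The point isn't the numbers; it's that writing the tree down forces you to state which assumptions your estimate hinges on.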



