
Noam Chomsky and Doug Hofstadter had the same opinion. Last I checked, Doug has recanted his skepticism and is seriously afraid for the future of humanity. I'll listen to him and my own gut rather than some random internet people still insisting this is all a nothing burger.



The thing is my gut is telling me this is a nothing burger, and I'll listen to my own gut before yours - a random internet person insisting this is going to change the world.

So what exactly is the usefulness of this discussion? You think "I'll trust my gut" is a useful argument in a debate?


Trusting your gut isn't a useful debate tactic, but it is a useful tool for everybody to use personally. Different people will come to different conclusions, and that's fine. Finding a universal consensus about future predictions will never happen; it's an unrealistic goal. The point of the discussion isn't to create a consensus. It's useful because listening to people with other opinions can shed light on the blind spots all of us have, even if we're pretty sure the other guys are wrong about all or most of what they're saying.

FWIW my gut happens to agree with yours.


I'm convinced that the "LLMs are useless" contingent on HN is just psychological displacement.

It hurts the pride of technical people that there's a revolution going on that they aren't involved in. Easier to just deny it or act like it's unimpressive.


Or maybe it's technical people who have been around for a few of these revolutions, which revolved and revolved until nothing was left but a lot of burned VC, recognising the pattern? That's where I'd place my own cynicism. My bullshit radar has proven pretty reliable over the past few decades in this industry, and it's been blaring at its highest level for a while about this.


Deep learning has already proven its worth. Google Translate is an example on the older side. As LLMs go, I can take a picture of a tree or an insect, upload it, and have an LLM identify it in seconds. I can paste a function that doesn't work into an LLM and it will usually identify the problems. These are truly remarkable steps forward.

How can I account for the cynicism that's so common on HN? It's got to be a psychological mechanism.



