
It's a pretty sucky solution to that problem imo, and I can see a substantial risk that it causes people to withdraw even more from real relationships.



One concern that I do worry about is that if LLMs are able to present a falsely attractive view of the world, the user will become increasingly dependent on the LLM to maintain that view. A cult of one. Reminds me of the episode 'Safe Space' from South Park, but instead of Butters filtering content it'll be the LLM. People are already divorced enough from reality - but I see no reason why they couldn't become more divorced, at least temporarily.


That raises the question of who decides what “reality” is, though. A lot of people have an unrealistically negative view of themselves and their abilities, often based on spending time around pessimistic or small-minded humans.

In that case, if an AI increases someone’s confidence in themselves, you could say it’s giving them a stronger sense of reality by helping them to question distorted and self-limiting beliefs.


Reality as in the real world: it is what it is, no one decides.


We're talking about psychology, therapy, sycophancy, etc. None of this is empirical.

If someone thinks they can, say, create a billion dollar startup, whether they can really do it or not is a subjective determination. The AI might tell the person they can do it. You might tell them they can't, that the AI is sycophantic, and that they should stop talking to it because they're losing touch with reality.

But is the AI a sycophant, or are you an irrational pessimist?


The AI will be saying the same thing to everyone. Rationally, what are the chances every single OpenAI customer will be building a billion dollar startup any time soon?

But it's even more obvious than that. The sycophancy is plain old love bombing, which is a standard cult programming technique.

As for startups - let's wait until the AI has built a few of its own, or at least mentored humans successfully.


That's easy. What makes someone a sycophant, by definition, is that their encouragement and flattery are unconditional and completely disconnected from any realistic consideration of your ideas.

You can't judge whether an LLM is acting like a sycophant without reading the conversation, and you can't judge whether a human is being an irrational pessimist without having the full context.

Are they a highly intelligent, technically skilled, and socially competent person (probably not if they discuss their ideas with ChatGPT instead of a friend), or do they have a high school diploma, zero practical skills, and have spent the past 20 years smoking weed all day?


That depends on whether they are capable of creating a billion dollar startup.

If they aren’t, and I say they aren’t, then I am correct. If they are, and the AI’s output says they are, then the AI’s output is correct.


I think it is more complicated than just a matter of being correct or not. Common advice in some creative professions is "don't bother trying to do X for a living, you'll never make it." The point of the advice is not whether it is literally correct and the person really is better off not bothering - in the general case, it probably is. The point is that it acts as a filter for those not motivated enough. It's a backhanded sort of test implicitly presented to the aspirant.

Someone who really, really wants to make a billion dollar startup against all odds is going to ignore your advice anyway. In fact, they would ignore any AI's advice on the topic as well. But that kind of arrogance is precisely what's required to be able to pull it off. Someone who quits the moment an AI tells them "don't do it" was not cut out to accomplish such a goal to begin with.

And maybe in the end the startup will only be worth a couple million dollars, but the hubris to believe they could go even further would be what got them that far at all. So "can build a billion dollar startup" ended up being false, but something else was gained in the end.


We get into a bit of a weird space though when they know your opinions about them. I'm sure there are quite a few people who can only build a billion dollar startup if someone emotionally supports them in that endeavor. I'm sure more people could build such a startup if those around them provide knowledge or financial support. In the limit, pretty much anyone can build a billion dollar startup if handed a billion dollars. Are these people capable or not capable of building a billion dollar startup?

EDIT: To be clear, I somehow doubt an LLM would be able to provide the level of support needed in most scenarios. However, you and others around the potential founder might make the difference. Since your assessment of the person likely influences the level of support you provide to them, your assessment can affect the chances of whether or not they successfully build a billion dollar startup.


Hopefully there are better solutions to the fundamental limitations of societal empathy in the future, but for now I just can't see any.

Seems to me empathy on a societal scale has been receding as the population grows, not increasing to match (or outpace) it.

Telling people to seek empathy elsewhere seems to me about as useful as telling people at an oasis in the desert to look for water elsewhere, but I hope I'm wrong.



