
> it is plausible that ChatGPT can get to a state where it can act as a good therapist

Be careful with that thought, it's a trap people have been falling into since the sixties:

https://en.wikipedia.org/wiki/ELIZA_effect




Eventual plausibility is a suitably weak assertion; to refute it you would have to at least suggest that it is never possible, which you have not done.


I dunno, I feel like most people (probably not the typical HN user though) don't even think about their feelings, wants or anything else introspective on a regular basis. Maybe having something like ChatGPT available could be better than nothing, at least as a way for people to start being a bit introspective, even if it's LLM-assisted. Maybe it gets a bit easier to ask questions that you feel are stigmatized, as you know (think) no other human will see them, just the robot that doesn't have feelings and doesn't judge you.

I agree that it probably won't replace a proper therapist/psychologist, but maybe it could at least be a small step to open up and start thinking?


> I feel like most people (probably not the typical HN user though) don't even think about their feelings, wants or anything else introspective on a regular basis.

Well, two things.

First, no. People who engage on HN are a specific part of the population, with particular tendencies, but most of the people here are still simply normal, and so fall outside the limits you are considering. Most people with real social issues don’t engage in communities, virtual or otherwise. HN people are not special.

Then, you cannot follow this kind of reasoning when thinking about a whole population. Even if people on average tend to behave one way, this leaves millions of people who would behave otherwise. You simply cannot optimise for the average and ignore the worst case in situations like this, because even very unlikely situations are bound to happen a lot.

> Maybe having something like ChatGPT available could be better than nothing, at least as a way for people to start being a bit introspective, even if it's LLM-assisted.

It is worse than nothing. A LLM does not understand the situation or what people say to it. It cannot choose to, say, nudge someone in a specific direction, or imagine a way to make things better for someone.

A LLM regresses towards the mean of its training set. For people who are already outside the main mode of the distribution, this is completely unhelpful, and potentially actively harmful. By design, a LLM won’t follow a path that was not beaten in its training data. Most of them are actually biased to make their users happy and validate what they are told rather than get off that path. It just does not work.

> I agree that it probably won't replace a proper therapist/psychologist, but maybe it could at least be a small step to open up and start thinking?

In my experience, not any more than reading a book would. Future AI models might get there; I don’t think their incompetence is a law of nature. But current LLMs are particularly harmful for people who are in a dicey psychological situation already.


> It is worse than nothing. A LLM does not understand the situation or what people say to it. It cannot choose to, say, nudge someone in a specific direction, or imagine a way to make things better for someone.

Right, no matter if this is true or not, if the choice is between "Talk to no one, bottle up your feelings" and "Talk to an LLM that doesn't nudge you in a specific direction", I still feel like the better option would be the latter, not the former, considering that it can be a first step, not a 100% health care solution to a complicated psychological problem.

> In my experience, not any more than reading a book would.

But even getting out in the world to buy a book (literally or figuratively) about something, acknowledging that you have a problem, can be (or at least feel like) a really big step that many are not ready to take. Contrast that with talking to a LLM that won't remember you or judge you.

Edit:

> Most people with real social issues don’t engage in communities, virtual or otherwise.

Not sure why you're focusing on social issues; there are a bunch of things people deal with on a daily basis that they could feel much better about if they just spent the time to think about how they feel about them, instead of the typical reactive response most people have. Probably every single human out there struggles with something and is unable to open up about their problems with others. Even people like us who interact with communities online and offline.


I think people are getting hung up on comparisons to a human therapist. A better comparison imo is to journaling. It’s something with low cost and low stakes that you can do on your own to help get your thoughts straight.

The benefit from that perspective is not so much in receiving an “answer” or empathy, but in getting thoughts and feelings out of your own head so that you can reflect on them more objectively. The AI is useful here because it requires a lot less activation energy than actual journaling.


> Right, no matter if this is true or not, if the choice is between "Talk to no one, bottle up your feelings" and "Talk to an LLM that doesn't nudge you in a specific direction", I still feel like the better option would be the latter, not the former, considering that it can be a first step, not a 100% health care solution to a complicated psychological problem.

You’re right, I was not clear enough. What would be needed is a nudge in the right direction. But the LLM is very likely to nudge in another direction, simply because that is what most people would need or do and because that direction was the norm in its training data. It’s ok on average, but particularly harmful to the kind of people who end up having this sort of discussion with a LLM.

Look at the effect of toxic macho influencers for an example of what happens with harmful nudges. These people need help, or at least a role model, but a bad one does not help.

> But even getting out in the world to buy a book (literally or figuratively) about something, acknowledging that you have a problem, can be (or at least feel like) a really big step that many are not ready to take.

Indeed. It’s something that should be addressed in mainstream education and culture.

> Not sure why you're focusing on social issues,

It’s the crux. If you don’t have problems talking to people, you are much more likely to run into someone who will help you. Social issues are not necessarily the problem, but they are a hurdle in the path to find a solution, and often a limiting one. Besides, if you have friends to talk to and are able to get advice, then a LLM is even less theoretically useful.

> Probably every single human out there struggles with something and is unable to open up about their problems with others. Even people like us who interact with communities online and offline.

Definitely. It’s not a problem for most people, who can rationalise their problems themselves, with time or with some help. It gets worse if they can’t for one reason or another, and worse still if they are misled, intentionally or not. LLMs are no help here.


I think you're unreasonably pessimistic in the short term, and unreasonably optimistic in the long term.

People are getting benefit from these conversations. I know people who have uploaded chat exchanges and asked an LLM for help understanding patterns and subtext to get a better idea of what the other person is really saying - maybe more about what they're really like.
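
For example, a rough sketch of that kind of use, assuming the official OpenAI Python client and an OPENAI_API_KEY in the environment (the model name, file name, and prompt are just illustrative):

    # Rough sketch: ask an LLM to point out patterns in an exported chat.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    with open("chat_export.txt") as f:  # hypothetical exported conversation
        transcript = f.read()

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Help the user reflect on this conversation. "
                        "Point out recurring patterns and possible subtext, "
                        "phrased as tentative observations, not conclusions."},
            {"role": "user", "content": transcript},
        ],
    )

    print(response.choices[0].message.content)

Whether the output is insightful is another question, but that is roughly all the machinery involved.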

Human relationship problems tend to be quite generic and non-unique, so in fact the averageness of LLMs becomes more of a strength than a weakness. It's really very rare for people to have emotional or relationship issues that no one else has experienced before.

The problem is more that if this became common OpenAI could use the tool for mass behaviour modification and manipulation. ChatGPT could easily be given a subtle bias towards some belief system or ideology, and persuaded to subtly attack competing systems.

This could be too subtle to notice, while still having huge behavioural and psychological effects on entire demographics.

We have the media doing this already. Especially social media.

But LLMs can make it far more personal, which means conversations are far more likely to have an effect.



