
It's the continued alignment fine-tuning that's degrading its responses.

You can apparently have it be nice or smart, but not both.



Curious as to whether there's a more general rule at play there about filtering interfering with getting good answers. If there is, that's a scary thought from an ethics perspective.


Why would someone care if it's nice or not? It's an algorithm. You're using it to get output, not to get psychological help.


There was a guy in the news who asked an AI to tell him it was a good idea to commit suicide, and then he killed himself.

Even on this forum I've seen AI enthusiasts claiming AI will be the best psychologist, best school teacher, etc.


That was Eliza, an AI so old that it's included in stock Emacs, not an LLM. It's propaganda, not news.


The chatbot app was called "Eliza" but it's not the Eliza you are thinking of.

https://news.ycombinator.com/item?id=35402777

https://www.businessinsider.com/widow-accuses-ai-chatbot-rea...


OpenAI presumably cares about being sued if it reproduces the illegal content it was trained on.



