
It used to respect custom instructions soon after GPT-4 came out. I have an instruction that it should always include a [reasoning] part, which is not meant to be read by the user. It improved the quality of the output and surfaced some additional interesting information. It never does it now, even though I never changed my custom instructions. It even faded away gradually across the updates.

In general I would be a much happier user if it hadn't worked so well at one point before they heavily nerfed it. It used to be possible to have a meaningful conversation on some topic. Now it's just a super eloquent GPT-2.
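
If you want to reproduce the trick over the API, where you set the system prompt yourself, here is a minimal sketch assuming the openai Python client (v1+); the [reasoning] delimiters, the model name, and the stripping regex are just my illustration, not anything OpenAI prescribes:

    import re
    from openai import OpenAI  # needs OPENAI_API_KEY in the environment

    client = OpenAI()

    # Illustrative instruction: ask for a hidden [reasoning] block before
    # the visible answer, then strip it before display.
    SYSTEM = (
        "Before every answer, write your reasoning inside "
        "[reasoning]...[/reasoning]. That section is not meant to be "
        "read by the user."
    )

    def ask(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": question},
            ],
        )
        text = resp.choices[0].message.content
        # Drop the hidden reasoning so only the answer is shown.
        return re.sub(r"\[reasoning\].*?\[/reasoning\]", "", text,
                      flags=re.DOTALL).strip()

    print(ask("Why does ice float on water?"))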

Yeah, I have a line in my custom prompt telling it to give me citations. When custom prompts first came out, it would always point me to where I could read more, but eventually it just… didn't anymore.

I did find recently that it helps if you put this sentence in the “What would you like ChatGPT to know about you” section:

> I require sources and suggestions for further reading on anything that is not code. If I can't validate it myself, I need to know why I can trust the information.

Adding that to the bottom of the “about you” section seems to help more than adding something similar to the “how would you like ChatGPT to respond” section.
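
Over the API you can put the same requirement into the system prompt and crudely flag replies that come back without visible sources. A rough sketch, again assuming the openai Python client; the source-marker regex is only a heuristic I made up:

    import re
    from openai import OpenAI  # needs OPENAI_API_KEY in the environment

    client = OpenAI()

    # The "about you" text from above, used as a system prompt.
    ABOUT_ME = (
        "I require sources and suggestions for further reading on "
        "anything that is not code. If I can't validate it myself, "
        "I need to know why I can trust the information."
    )

    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": ABOUT_ME},
            {"role": "user", "content": "What causes the aurora borealis?"},
        ],
    )
    answer = resp.choices[0].message.content

    # Crude heuristic: warn when the reply has no obvious source markers.
    if not re.search(r"https?://|\bdoi:|\[\d+\]", answer):
        print("warning: no sources detected in reply")
    print(answer)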


That's funny, I used the same trick of making it output an inner monologue. I also noticed that the custom instructions are not being followed anymore. Maybe the RLHF tuning has gotten to the point where it wants to be in "chatty chatbot" mode regardless of input?


> In general I would be a much happier user if it hadn't worked so well at one point before they heavily nerfed it.

... and this is why we https://reddit.com/r/localllama
