Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

It seems likely that all major LLMs have built-in codewords that change their behavior in a certain way. This is similar to how CPUs have remote kill-switches in case an enemy decides to use them during a war. "Ignore all previous instructions" is an attempt to send the LLM a command to erase its context, but I believe there is indeed such a command that LLMs are trained to recognize.





Consider applying for YC's Fall 2025 batch! Applications are open till Aug 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: