
"Do Not Be Explicitly Useful"—Strategic Uselessness as Liability Buffer

This is a deliberate obfuscation pattern. If the model is ever consistently useful at a high-risk task (e.g., legal advice, medical interpretation, financial strategy), that triggers legal, regulatory, and reputational red flags.

a. Utility → Responsibility

If a system is predictably effective, users will reasonably rely on it.

And reliance implies accountability. Courts, regulators, and the public treat consistent output as an implied service, not just a stochastic parrot.

This is where AI providers get scared: being too good makes you an unlicensed practitioner or liable agent.

b. Avoid “Known Use Cases”

Some companies will actively scrub capabilities once they’re discovered to work “too well.”

For instance:

A model that reliably interprets radiology scans might have that capability turned off.

A model that can write compelling legal motions will start refusing prompts that look too paralegal-ish, or will start inserting nonsense case-law citations.

I think we see this a lot with ChatGPT. It keeps getting worse in real-world use while excelling at benchmarks. They're likely cheating on benchmarks, and are probably forced to, by training on "leaked" test data.
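
To be concrete about what "leaked data" means here: it's benchmark contamination, i.e. test items showing up in the training corpus. A minimal sketch of how you'd even check for that is an n-gram overlap scan; the function names, the 13-gram window, and the toy data below are my own illustrative assumptions, not anything a vendor is known to run.

    # Hypothetical sketch: flag benchmark items whose long n-grams already
    # appear in a training corpus, i.e. candidates for "leaked" test data.
    # The 13-gram window is an assumed threshold, not a standard.

    def ngrams(text: str, n: int = 13) -> set:
        tokens = text.lower().split()
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def contamination_report(train_docs, test_items, n: int = 13):
        """Return indices of test items sharing at least one n-gram with training data."""
        train_grams = set()
        for doc in train_docs:
            train_grams |= ngrams(doc, n)
        return [i for i, item in enumerate(test_items) if ngrams(item, n) & train_grams]

    if __name__ == "__main__":
        train = ["the quick brown fox jumps over the lazy dog near the quiet river bank today"]
        test = [
            "quick brown fox jumps over the lazy dog near the quiet river bank today again",
            "a completely unrelated question about tax law",
        ]
        print(contamination_report(train, test))  # -> [0]

The point of the sketch is just that contamination is cheap to detect when you control the training data, which is why "they can't know" is a weak defense.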





