I agree with you about AI ethicists (and in general, someone whose job is only ethics is usually a grifter), but OpenAI’s safety team was a red team (at least a few months ago), testing the model’s ability to escape boxes by nearly giving it the power to do so. They were the ones behind the famous “watch the AI lie to the Upworker I hired so he’ll do the work” demonstration.

So the structure matters. Ethicists who produce papers on why ethics matters and the like are kind of like security, compliance, and legal people at your company who can only say no to your feature.

But Google’s Project Zero team is a capable team that produces output that actually helps Google and everyone else. In a particularly moribund organization, they really stand out.

I think the model is sound. If your safety, security, compliance, and legal teams believe that the only acceptable risk is a mud ball buried in the ground, then you don’t actually have any of those functions, because that job could be done by an EA with an autoresponder. What an effective team does is minimize your risks on this front while still letting you build toward your objective.