
> instead being allowed to define “alignment” for themselves.

Yeah, that is the whole point - not wanting bad actors to be able to define "alignment" for themselves.

Not sure how that is unethical.




>Yeah, that is the whole point - not wanting bad actors to be able to define "alignment" for themselves.

Historically the people in power have been by far the worst actors (e.g. over a hundred million people killed by their own governments in the past century), so giving them the sole right to "align" AI with their desires seems extremely unethical.


Given the shitshow the current board of OpenAI has managed to create out of nothing, I'd not trust them with a blunt pair of scissors, let alone with deciding what alignment is.


Let's say someone figures out alignment. We develop models that plug into the original ones, either as extra stages in training or as a filter that runs on top. What prevents anyone from just building the same architecture and leaving the alignment parts out, practically invalidating whatever time was spent on it?
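
A minimal sketch of the "filter that runs on top" idea (all names here are hypothetical, not any real library's API): the alignment lives only in a wrapper around an unaligned base model, so anyone who rebuilds the base architecture and skips the wrapper gets the unfiltered behaviour back.

    class BaseModel:
        def generate(self, prompt: str) -> str:
            # Stand-in for an arbitrary, unaligned generative model.
            return f"raw completion for: {prompt}"

    class SafetyFilter:
        def is_allowed(self, text: str) -> bool:
            # Stand-in for a learned or rule-based alignment check.
            banned_phrases = ("synthesize a pathogen",)
            return not any(p in text.lower() for p in banned_phrases)

    class AlignedModel:
        # Base model plus a filter bolted on top.
        def __init__(self, model: BaseModel, safety_filter: SafetyFilter):
            self.model = model
            self.safety_filter = safety_filter

        def generate(self, prompt: str) -> str:
            completion = self.model.generate(prompt)
            if not self.safety_filter.is_allowed(completion):
                return "[refused]"
            return completion

    # The alignment exists only in the wrapper: calling
    # BaseModel().generate() directly bypasses it entirely.
    aligned = AlignedModel(BaseModel(), SafetyFilter())
    print(aligned.generate("hello"))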


Hopefully the law.


And how would you enforce it?


Who gets to decide what constitutes a "bad actor"? Sounds an awful lot like "dangerous misinformation". And based on the last three years, "dangerous misinformation" quite often means "information that goes against my narrative".

It’s a slippery slope letting private or even public entities define “bad actors” or “misinformation”. And it isn’t even a hypothetical… plenty of factually true information about covid got you labeled as a “bad actor” peddling “dangerous misinformation”.

Letting private entities whose platforms have huge influence on society decide what is “misinformation” coming from “bad actors” has proven to be a very scary proposition.



