OpenAI Scraps Team That Researched Risk of 'Rogue' AI (businessinsider.com)
25 points by aleph_minus_one on May 18, 2024 | hide | past | favorite | 10 comments




Probably because it was never a real thing.

This whole "AI safety" thing just screams scaremongering aimed at people who can barely use email.

A way to generate hype, and more maliciously keep the incumbents from losing their positioning by pushing needless regulations.


An excellent X thread from Yann LeCun saying something similar:

https://x.com/ylecun/status/1791890883425570823


I think that for someone or something to be actually superintelligent like humans, it needs to be conscious first, and the whole underlying foundation of software AI is wrong, or at least not suited to scale to the point of superintelligence. You won't see AI evolve the way we saw life evolve here on Earth.

Humans are bad at multitasking but computers excel at it; that's why machine/deep learning is so impressive. Computers thrive on parallel computing while humans, like I said, suck at it. So Yann is right: LLMs are information-retrieval super beasts that are programmed to predict, not to make independent decisions or tinker with anything in particular.


The example he uses seems very apples and oranges to me. I remember a scene from Age of Ultron where 30 seconds after Ultron becomes aware he reasons that humanity must be destroyed. It's just a movie but that is more closely aligned to the reality people are worried about.


AI would eradicate humans for what? Just for lolz, or because it has some particular long-term plan in mind? I'd rather say AI could screw up while trying to help humans; we've already seen that happen when badly designed or implemented AI algorithms fail.


More likely some dimwit will copy-paste hallucinated code that does something horrible, lol, let's be honest.


So you say Jan Leike is scaremongering?

https://news.ycombinator.com/item?id=40391412


AI safety is of existential importance to the human race. AI safety is used by market incumbents to create an artificial moat. Both things can be true.


The irony is that they were scrapped after failing to align OpenAI itself by forcing out Sam Altman.

Sam Altman completely outsmarted and outmaneuvered them.

If they lost so badly to Sam Altman, they are deluded if they think they can align a super-human intelligence.



