
Because the Chief Scientist let ideology overrule pragmatism. There is always a tension between technical and commercial. That’s a battle that should be fought daily, but never completely won.

This looks like a terrible decision, but I suppose we must wait and see.



OpenAI is a non-profit research organisation.

Its for-profit (capped-profit) subsidiary exists solely to enable competitive compensation for its researchers, so they don't have to worry about the opportunity cost of working at a non-profit.

They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.

Aggressive commercialization is at odds with their raison d'être and they have no need for it to fund their research. For as long as they continue to push forward the state of the art in AI and build ground-breaking technology they can let Microsoft worry about commercialization and product development.

If a CEO is not just distracting but actively hampering an organisation's ability to fulfill its mission then their dismissal is entirely warranted.


It seems Microsoft was totally blindsided by this event. If true, then trillion-dollar-plus Microsoft will now be scrutinizing the unpredictability and organizational risk of being dependent on the "unknown-random" + powerful + passionate Ilya and a board who are vehemently opposed to the trajectory led by Altman. One solution would be to fork OpenAI and its efforts: one side with the vision led by Ilya, the other by Sam.


I don't think you know what intellectual property is.


It seems you have jumped to many conclusions in your thinking process without any prompting in your inference. I would suggest lowering your temperature ;)


One doesn't simply 'fork' a business unless it has no/trivial IP, which OpenAI does not.


Forked:

https://twitter.com/satyanadella/status/1726509045803336122

"to lead a new advanced AI research team"

I would assume that Microsoft negotiated significant rights with regards to R&D and any IP.


I wouldn't call starting from zero forking


What is starting from zero exactly?


Even a non-profit needs to focus on profitability, otherwise it won't exist for very long. All 'non-profit' means is that it's prohibited from distributing its profits to shareholders. Owning a non-profit doesn't pay you. The non-profit itself still wants, and is trying, to generate more than it spends.


I addressed that concern in my third paragraph.


>They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.

Isn't this already a conflict of interest, or a clash, with this:

>OpenAI is a non-profit research organisation.

?


> ?

"OpenAI is a non-profit artificial intelligence research company"

https://openai.com/blog/introducing-openai


Yeah! People forget who we're talking about here. They put TONS of research in at an early stage to ensure that illegal thoughts and images cannot be generated by their product. This prevented an entire wave of mental harms against billions of humans that would have been unleashed otherwise if an irresponsible company like Snap were the ones to introduce AI to the world.


As long as truly "open" AI wins, as in fully open-source AI, then I'm fine with such a "leadership transition."


this absolutely will not happen, Ilya is against it


Yeah if you think a misused AGI is like a misused nuclear weapon, you might think it’s a bad idea to share the recipe for either.


> This looks like a terrible decision

What did Sam Altman personally do that made firing him such a terrible decision?

More to the point, what can't OpenAI do without Altman that they could do with him?


> What did Sam Altman personally do that made firing him such a terrible decision?

Possibly the board instructed "Do A" or "Don't do B" and he went ahead and did do B.


This is what it feels like -- a board filled with academics concerned about AI safety.



