I think this has more to do with legal concerns than anything else. Virtually no one reads the page except adversaries who want to sue the company. I don't remember the last time I looked up a company's mission statement before purchasing from them.
It matters more for non-profits, because the mission statement in your IRS filings is part of how the IRS evaluates whether you should keep your non-profit status.
I'm on the board of directors of the Python Software Foundation, and the board has to pay close attention to our official mission statement when making decisions about what the foundation should do.
> your mission statement in your IRS filings is part of how the IRS evaluates if you should keep your non-profit status or not.
So has the IRS spotted the fact that "unconstrained by the need for financial return" got deleted? Will they? It certainly seems like they should revoke OpenAI's nonprofit status based on that.
Perhaps not, but if it was there before and then suddenly got removed, that ought to at least raise suspicion that the organization's nature has changed and that its status should be re-evaluated.
Of course, that reading of the IRS's duty would quickly turn into a partisan witch hunt. The PSF should be careful they don't catch strays, given that they turned down the grant.
In my opinion, they solved the wrong problem. The main issue I have with Codex is that the best model is insanely slow, except at night and on weekends, when Silicon Valley goes to bed. I don't want a faster, smaller model (I already have that with GLM and MiniMax). I want a faster, better model (at least as fast as Opus).
When they partnered with Cerebras, I kind of had a gut feeling that they wouldn't be able to use their technology for larger models because Cerebras doesn't have a track record of serving models larger than GLM.
It pains me that, five days before my Codex subscription ends, I have to switch to Anthropic: despite getting less quota than with Codex, at least I'll be able to actually use my quota _and_ stay in the flow.
But Codex's slowness aside, it's just not as good an "agentic" model as Opus. Here's what drove me crazy: https://x.com/OrganicGPT/status/2021462447341830582?s=20. The Codex model (gpt-5.3-xhigh) has no idea how to call agents, smh.
Yes, I was using that. But the prompt given to the agents is not correct: Codex sends a prompt to the first agent and then a second prompt to the second agent, but the second prompt references the first prompt, which the second agent never saw. That's completely incorrect.
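To spell out the failure mode (a minimal sketch; `dispatch_agent` and the prompts are hypothetical stand-ins, not Codex's actual internals): each sub-agent starts from a blank context, so a prompt that refers back to a sibling's prompt is meaningless to the agent that receives it. Every dispatch has to be self-contained:

```python
# Hypothetical sketch of the dispatch bug. "dispatch_agent" is an
# illustrative stand-in for however the orchestrator hands a prompt
# to a freshly spawned sub-agent.

def dispatch_agent(prompt: str) -> None:
    """Run a sub-agent whose entire context is just this prompt."""
    print(f"[sub-agent spawned with]: {prompt}")

# Broken pattern (what I saw): the second prompt refers back to a
# prompt the second agent never received.
dispatch_agent("Refactor the parser module to use the new AST types.")
dispatch_agent("Do the same as in the first prompt, but for the lexer.")
# ^ "the first prompt" is unresolvable from a blank context.

# Correct pattern: every sub-agent prompt stands on its own.
dispatch_agent("Refactor the parser module to use the new AST types.")
dispatch_agent("Refactor the lexer module to use the new AST types.")
```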
> In my opinion, they solved the wrong problem. The main issue I have with Codex is that the best model is insanely slow, except at night and on weekends, when Silicon Valley goes to bed. I don't want a faster, smaller model (I already have that with GLM and MiniMax). I want a faster, better model (at least as fast as Opus).
It's entirely possible that this is just the first step and that faster, better models will come later, too.
Haha, indeed that's true... That's why we've just released the Smooth CLI (https://docs.smooth.sh/cli/overview) and its associated SKILL.md (smooth-sdk/skills/smooth-browser/SKILL.md). That should contain everything your agent needs to know to use Smooth. We will definitely add an LLM-friendly reference to the landing page and the docs introduction.
The more I use LLMs, the more I find this to be true. Haskell made me think for minutes before writing a single line of code. The result? I stopped using Haskell and went back to Python, because with Python I can "think while I code". The separation between the thinking and coding phases in Haskell is what my lazy mind didn't want to tolerate.
Same goes for LLMs. I want the model to "get" what I mean, but oftentimes (especially with Codex) I have to be very specific about the project scope and spec. Codex doesn't let me "think while I vibe", because every change is costly and you'd better have a good recovery plan (git?) for when Codex goes astray.
and an incantation you put on your resume to double your salary for a few months, before the company you jumped ship to gets obsoleted by the foundation model
Anthropic will do anything to keep the Claude hype going: fearmongering ("AI bad, need government regulation"); wishful thinking ("90% of code will be written by AI by the end of 2025" —Dario); putting Claude in applications it has no business being in (Cowork, accessing all your files, what could go wrong?); releasing "research" papers every now and then showing how their AI "almost got out" before they stopped it (again, to show their models are "just that good"); prescribing what society should do to adapt to the new reality; and running worthless surveys on "how AI is reshaping the economy, but mostly our AI, not others'".