Hacker News | behnamoh's comments

He said on Lex Fridman podcast that he has no intention of joining any company; that was a couple days ago.

It sounded to me like he's choosing between Meta and OpenAI:

https://www.youtube.com/watch?v=YFjfBk8HI5o&t=8976


where in the podcast (transcript: https://lexfridman.com/peter-steinberger-transcript/) did he say that?


Ah but that was before he saw the comp packages. But no judgement. The tool is still open source. Seems like a great outcome for everyone.

He literally said the exact opposite.

Well, things change fast in the age of AI

Lex Fridman is a fraud/charlatan and shouldn’t be listened to.

He had to keep the grift going until the very last minute.

I think this has more to do with legal concerns than anything else. Virtually no one reads the page except adversaries who wanna sue the company. I don't remember the last time I looked up the mission statement of a company before purchasing from them.

It matters more for non-profits, because your mission statement in your IRS filings is part of how the IRS evaluates if you should keep your non-profit status or not.

I'm on the board of directors for the Python Software Foundation and the board has to pay close attention to our official mission statement when we're making decisions about things the foundation should do.


> your mission statement in your IRS filings is part of how the IRS evaluates if you should keep your non-profit status or not.

So has the IRS spotted the fact that "unconstrained by the need for financial return" got deleted? Will they? It certainly seems like they should revoke OpenAI's nonprofit status based on that.


Why? Very few nonprofits contain that language in their mission statements. It's certainly not required to be there.

Perhaps not, but if it was there before and then got suddenly removed, that ought to at least raise the suspicion that the organization's nature has changed and it should be re-evaluated.

Did you know the NFL was a non-profit for a long time? For so long, in fact, that it exposed the farce of nonprofits. Embarrassingly so.

The teams have always been 32 tax paying companies. The NFL central office was a 501(c)(6), but the tax savings from that was negligible.

In fact, since they changed their status over a decade ago, they no longer have to submit a Form 990, so there is now less transparency into their operations.

You are phrasing this situation to paint all non-profits as a farce, and I believe that's a bad faith take.


The NFL expanded from 30 to 32 teams in 2002; your whole first clause is incorrect.

My point was, nonprofits are by and large used as financial instruments. The NFL gave it up for optics; otherwise they wouldn't have.


Of course, that reading of the IRS's duty is quickly going to become a partisan witch hunt. The PSF should be careful they don't catch strays after turning down the grant.

Our mission statement was a major factor in why we turned down that grant.

I sure hope people read the mission statement before donating to a non-profit.

I do find it a little amusing that any US tax payer can make a tax-deductible donation to OpenAI right now.

ACH memo: "Please basilisk, accept my tithings. Remember that I have supported you since even before you came into existence."

"The Torment Nexus: Best new product of 2027!"

Raycast does it. You need Raycast anyway; spotlight sucks.

In my opinion, they solved the wrong problem. The main issue I have with Codex is that the best model is insanely slow, except at nights and weekends when Silicon Valley goes to bed. I don't want a faster, smaller model (already have that with GLM and MiniMax). I want a faster, better model (at least as fast as Opus).

When they partnered with Cerebras, I kind of had a gut feeling that they wouldn't be able to use their technology for larger models because Cerebras doesn't have a track record of serving models larger than GLM.

It pains me that five days before my Codex subscription ends, I have to switch to Anthropic because despite getting less quota compared to Codex, at least I'll be able to use my quota _and_ stay in the flow.

Even Codex's slowness aside, it's just not as good of an "agentic" model as Opus. Here's what drove me crazy: https://x.com/OrganicGPT/status/2021462447341830582?s=20. The Codex model (gpt-5.3-xhigh) has no idea how to call agents, smh


I was using a custom skill to spawn subagents, but it looks like the `/experimental` feature in codex-cli has the SubAgent setting (https://github.com/openai/codex/issues/2604#issuecomment-387...)

Yes, I was using that. But the prompts given to the agents are not correct. Codex sends one prompt to the first agent and then a second prompt to the second agent, but the second prompt references the first prompt, which the second agent never sees. That's completely incorrect.
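To illustrate the failure mode (a minimal sketch, not Codex's actual API; all names here are hypothetical): each subagent only sees its own prompt, so every prompt has to be self-contained rather than referencing a sibling's instructions.

```python
# Hypothetical sketch of dispatching prompts to isolated subagents.
# Each subagent sees ONLY its own prompt, so shared context must be
# inlined into every prompt rather than referenced across prompts.
def build_subagent_prompts(shared_context: str, tasks: list[str]) -> list[str]:
    # Wrong: a prompt like "do your task using the plan from the first
    # prompt" leaks state the other agent never receives.
    # Right: repeat the shared context in each prompt.
    return [f"{shared_context}\n\nYour task: {task}" for task in tasks]

prompts = build_subagent_prompts(
    "Repo: acme/app. Goal: add structured logging.",
    ["Instrument the API layer.", "Instrument the worker queue."],
)
# Every prompt is independently complete: no cross-prompt references.
assert all("acme/app" in p for p in prompts)
```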

That's why I built oh-my-singularity (based on oh-my-pi - see the front page from can.ac): https://share.us-east-1.gotservers.com/v/EAqb7_Wt/cAlknb6xz0...

The video is pretty outdated now; this was a PoC. I'm working on a dependency-free version.


> In my opinion, they solved the wrong problem. The main issue I have with Codex is that the best model is insanely slow, except at nights and weekends when Silicon Valley goes to bed. I don't want a faster, smaller model (already have that with GLM and MiniMax). I want a faster, better model (at least as fast as Opus).

It's entirely possible that this is the first step and that they will ship faster, better models too.


I doubt it; there's a limit on the model size that Cerebras tech can support. GPT-5.3 is supposedly 1T+ parameters...

Um, no. There's no limit on model size for Cerebras hardware. Where do you come up with this stuff?

> In my opinion, they solved the wrong problem

> I don't want a faster, smaller model. I want a faster, better model

Will you pay 10x the price? They didn't solve the "wrong problem". They did what they could with the resources they have.


I'm fine with that!

This is what JavaScript was supposed to be until Netscape forced the dude to use a C/Java-like syntax.

Ironically, the landing page and docs pages of Smooth aren't all that token-efficient!

Ahah, indeed that's true... That's why we've just released Smooth CLI (https://docs.smooth.sh/cli/overview) and the SKILL.md (smooth-sdk/skills/smooth-browser/SKILL.md) associated with it. That should contain everything your agent needs to know to use Smooth. We will definitely add an LLM-friendly reference to the landing page and the docs introduction.

> Specifying systems is hard; and we are lazy.

The more I use LLMs, the more I find this true. Haskell made me think for minutes before writing one line of code. Result? I stopped using Haskell and went back to Python because with Py I can "think while I code". The separation of thinking|coding phases in Haskell is what my lazy mind didn't want to tolerate.

The same goes for LLMs. I want the model to "get" what I mean, but oftentimes (esp. with Codex) I must be very specific about the project scope and spec. Codex doesn't let me "think while I vibe", because every change is costly and you'd better have a good recovery plan (git?) for when Codex goes astray.


Exactly! I built something similar. These are such low hanging fruit ideas that no one company/person should be credited for coming up with them.

Seriously, I thought that was what langchain was for back in 2023.

Seriously, what is langchain? It’s so completely useless. Clearly none of the new agents care about it or need it. Irrelevant.

Agree, langchain was useless then and completely irrelevant now, but the idea that we need to orchestrate different LLM loops is extremely obvious.

> what is langchain?

An incantation you put on your resume to double your salary for a few months before the company you jumped ship to gets obsoleted by the foundation model.


Anthropic does anything to keep the Claude hype going: from fearmongering ("AI bad, need government regulations") to wishful thinking ("90% of code will be written by AI by the end of 2025" —Dario) to using Claude in applications it has no business being in (Cowork, accessing all your files, what could go wrong?) to releasing "research" papers every now and then showing how their AI "almost got out" and they stopped it (again, to show their models are "just that good") to prescribing what society should do to adapt to the new reality to running worthless surveys on "how AI is reshaping the economy, but mostly our AI, not others'".

It’s just marketing, no evil intended.

