Hacker News

In late 2021, Ed Zitron wrote (on Twitter) that the future of all work was "work from home" and that no one would ever work in an office again. I responded:

"In the past, most companies have had processes geared towards office work. Covid-19 has forced these companies to re-gear their processes to handle external workers. Now that the companies have invested in these changed processes, they are finding it easier to outsource work to Brazil or India. Here in New York City, I am seeing an uptick in outsourcing. The work that remains in the USA will likely continue to be office-based, because the work that can be done 100% remotely will likely go overseas."

He responded:

"Pee pee poo poo aaaaaaaaaaa peeeeee peeeeee poop poop poop."

I don't know if he was taking drugs or what. I find his persona on Twitter to be baffling.



He was wryly communicating, "your argument was so stupid I don't even need to engage with it".

In my experience he has a horrible response to criticism. He's right on the AI stuff, but he responds to both legitimate and illegitimate feedback without much thoughtfulness, usually with a non-sequitur redirect or an ad hominem.

In his defense though, I expect 97% of feedback he gets is Sam Altman glazers, and he must be tired.


He's right on the AI stuff? How do you figure that? As far as I can tell, OpenAI is still operating. It sounds like you agree with him on the AI stuff, but he could be wrong, just like how he was wrong about remote work.

I'm actually more inclined to believe he's wrong if he gets so defensive about criticism. That tells me he's more focused on protecting his ego than actually uncovering the truth.


The fact that OpenAI is still operating and the argument that it is completely unsustainable are not two incompatible things.


Whether or not OpenAI is sustainable is a question that can only be answered in hindsight. If OpenAI is still around in 10 years, in the same sort of capacity, does OP become retroactively wrong?

My point is, you can agree that OpenAI is unsustainable, but it's not clear to me that is a decided fact, rather than an open conjecture. And if someone is making that decision from a place of ego, I have greater reason to believe that they didn't reason themselves into that position.


The fact that they are not currently even close to profitable, with ever-increasing costs and sobering scaling realities, is something you could consider. If you do believe they are sustainable, then you would have to believe in (in my opinion, unlikely) scenarios where they somehow become sustainable, which is also a conjecture.

Seems a little unreasonable to point out "they are still around" as a refutation of the claim that they aren't sustainable when, in fact, the moment the investment-money faucet keeping them alive is turned off, they collapse, and very quickly.


No, it's a question answerable now. If you're losing twice as much money as you're making, the end of your company is an inescapable fact unless you turn that trend around.

What Zitron points out, correctly, is that there currently exists no narrative beyond wishful thinking which explains how that reversal will manifest.


I don't think he's right about everything. He is particularly weak at understanding underlying technology, as others have pointed out. But, perhaps by luck, he is right most of the time.

For example, he was the lone voice saying that, despite all the posturing and media manipulation by Altman, OpenAI's for-profit transformation would not work out, and certainly not by EOY2025. He was also the lone voice saying that "productivity gains from AI" were not clearly attributable to such, and are likely make-believe. He was right on both.

Perhaps you have forgotten these claims, or the claims about OpenAI's revenue from "agents" this year, or that they were going to raise ChatGPT's price to $44 per month. Altman and the world have seemingly memory-holed these claims and moved on to even more fantastical ones.

He has never said that OpenAI would be bankrupt, his position (https://www.wheresyoured.at/to-serve-altman/, Jul 2024) is:

I am hypothesizing that for OpenAI to survive for longer than two years, it will have to (in no particular order):

- Successfully navigate a convoluted and onerous relationship with Microsoft, one that exists both as a lifeline and a direct source of competition.

- Raise more money than any startup has ever raised in history, and continue to do so at a pace totally unseen in the history of financing.

- Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.

- Have such a significant technological breakthrough that GPT is able to take on entirely unseen new use cases, ones that are not currently possible or hypothesized as possible by any artificial intelligence researchers.

- Have these use cases be ones that are capable of both creating new jobs and entirely automating existing ones in such a way that it will validate the massive capital expenditures and infrastructural investment necessary to continue.

I ultimately believe that OpenAI in its current form is untenable. There is no path to profitability, the burn rate is too high, and generative AI as a technology requires too much energy for the power grid to sustain it, and training these models is equally untenable, both as a result of ongoing legal issues (as a result of theft) and the amount of training data necessary to develop them.

He is right about this too. They are doing #2 on this list.


Is he right on the AI stuff? Like, on the OpenAI company stuff he could be? I don't know? But on the technology? He really doesn't seem to know what he's talking about.


> But on the technology? He really doesn't seem to know what he's talking about.

That puts him roughly on-par with everyone who isn't Gerganov or Karpathy.


I generally don't agree with him on much; it's just that nobody really talks about how much money those companies burn, and are expected to burn, in the bigger picture.

For me, 10 billion, 100 billion, and 1 trillion are all very abstract numbers - until you show how unreal 1 trillion is.


It helps if you divide by the world population. Say ~10bn for this purpose, so that's roughly $1, $10, or $100 per head.
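That division is trivial to check for yourself; here's a quick sketch, assuming the same round ~10 billion world-population figure used above:

```python
# Per-capita share of large headline sums, assuming a round
# world population of ~10 billion (the figure used above).
WORLD_POPULATION = 10_000_000_000

def per_head(total_dollars: int) -> float:
    """Spread a headline dollar figure evenly across everyone on Earth."""
    return total_dollars / WORLD_POPULATION

for label, total in [("$10 billion", 10**10),
                     ("$100 billion", 10**11),
                     ("$1 trillion", 10**12)]:
    print(f"{label} -> about ${per_head(total):,.0f} per person")
# $10 billion -> $1, $100 billion -> $10, $1 trillion -> $100 per person
```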


> "Pee pee poo poo aaaaaaaaaaa peeeeee peeeeee poop poop poop."

Attach your name to this publicly, and you're a clown. I don't know why the world started listening to clowns and taking them seriously, when their personas are crafted to be non-serious on purpose.

Like I said, clowns.



