fairity's comments | Hacker News

Why are people saying this seems like a bad deal?

If they really only raised $1.7b, per Crunchbase, then this seems to me like a very good outcome for everyone involved except its late stage investors. And, even for the late stage investors, they're breaking even.


I assume if you put in 100 mn at a 12 bn valuation in the last round, you're either getting 100 back at 1x pref or you're screwing over the common even more?

Considering the 12bn round was back in 21, I'd expect most of the employee base to be taking a haircut on the value of their options.


I assume it's the $1.2bn paid back to investors and then some divvying up of the remainder amongst investors, founders, and common

No. The last two investment tranches will get their money back, based on 1x liquidation preference. Employees who joined in the last 5 years are fucked if they got options. If they have RSUs, they will get only a fraction of their equity.

It sounds like investors got out okay, but employees got fucked big time. It's a terrible exit and Brex waited too long until their growth stalled.


Hopefully those who joined took the all-cash option when that was still available.

Sorry, how did employees get fucked? There's more money after the $1.7B.

Yes, and it goes to the same people that the first 1.7B goes to.

The order of operations is not "everyone breaks even, then we start distributing profit".

The order of operations is "people with preferred stock (i.e. investors) get all their profit, and then employees get whatever's left over".

The fact that the amount of investment money put in is less than the sale price is meaningless. If you are an employee with options at a strike price of $5, and the common stock price is now $2, you're screwed.
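A minimal sketch of the mechanics, for anyone who wants to play with the numbers (everything here is hypothetical: a simplified 1x non-participating preference, ignoring conversion elections, with made-up share counts):

    # Hypothetical 1x liquidation preference waterfall (simplified).
    def common_price_per_share(sale_price, total_preference, common_shares):
        # Preferred takes its 1x preference off the top;
        # common splits whatever is left.
        remaining = max(sale_price - total_preference, 0)
        return remaining / common_shares

    # $3B sale, $1.7B of 1x preferences, 500M common shares (all made up):
    price = common_price_per_share(3.0e9, 1.7e9, 500e6)  # ~$2.60/share

    # An option struck at $5 is worthless at a ~$2.60 common price.
    option_value = max(price - 5.0, 0)  # 0.0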


All the investors before 2019 got multiples of their investment.

So series B is worth about 250M and series C is worth about 625M. Series C-2 is worth about 1.5B. Series D is worth 425M and Series D2 is worth 300M because of LP. That's a total of 3B.

That leaves 2B for everyone else. Most employees are going to get fucked big time, especially the ones after 2019. They will get a small fraction of their RSUs and all their options will be worthless, if they had options.


According to Peter Walker from Carta:

> the company re-cap'd employees at a more realistic valuation a couple years back. So looks like all employees benefited here which is a major win. Respect to the founders for looking out!


Silicon Valley seems gamed against employees - it gets worse every year. Companies don't even share the cap table (including many YC companies).

Words are cheap. I really wish there was a way to incentivize authors like this to put their money where their mouth is, before seeking attention for their ideas.

I’m guessing it’s hard to go short OpenAI without also going short a bunch of other companies riding the AI wave that aren’t led by Sam Altman?

Would love to hear how that could work.


Shorting a security means risking unlimited losses if the stock you're shorting continues to increase in value. As the saying goes: the market can stay irrational longer than you can stay solvent.

Just short Nvidia. If the thing goes bang then that’ll be one of the big losers.

"Just short Nvidia"

Is this financial advice? :-)


It was more “if you’re going to short then short nvidia seeing as it’s not possible to short companies such as OpenAI which are private”.

If so, keep in mind that it's contingent advice. The question was how to profit from predicting an AI bubble popping [or something along those lines]. The answer is shorting Nvidia (assuming your prediction also includes a timeline).

It's always a way to lose a massive amount of money if you're wrong, so the advice is also contingent on confidence level.


I'm not trying to be facetious here, but I think it's very naive to assume you get to "profit from predicting an AI bubble." In theory, maybe, but in reality you will lose money. Shorting is never the solution... it's a very niche tool for very niche group of investors.

When retail guys talk shorting, it's very hard to take them seriously.


Retail guys generally think they can time the market based on vibes, rather than specific insider-y info. But if you (retail investor or not) have that specific insider-y info--something resulting in justified, high probability, time-bounded knowledge about a future change, shorting can be the rational decision.

Hmm. Maybe he might do... a bet!

And then maybe he might ... change the bet! when he was about to lose?

Maybe!

Who's to say, really? Certainly not me! I'm just a neural network!


What's your complaint about this article? I wish there were a way to incentivize comments to put effort into specific criticism, before seeking attention for their ideas.

I'm not aiming this at GP specifically, but there seems to be a culture around gen AI that the burden of proof is on sceptics, not the people claiming we're about to invent God

Gary is an insufferable blowhard, but he's had skin in the AI game for a while. I believe he sold an AI startup to Uber back in the 2010s.

Many of his criticisms of OpenAI and LLMs have been apt.


It's possible to notice a trend while still having the wit to realize you can't precisely time that trend well enough to profit from it. Another example might be, "Trump is increasingly old, feeble, and incapable of doing his job... but I'm not sure how that will translate into how long he's able to keep the job. It's possible that he could be a vegetable at some point and still POTUS."

Demanding that people gamble with their often limited finances to prove a point orthogonal to the one they're actually making feels disingenuous and dismissive to me.


It's not orthogonal. And you will find people will change their mind when forced to put a little money on the line.

"Team X is definitely winning, I'm certain". "So you'll offer me 1000-1 on the opponent?". "No". "6-1?", "No". They often realize they are about 65% certain at some point. And they often aren't being hyperbolic, they are just not thinking clearly.


There was a point after which the outcome of WWII was obvious and inevitable, but could you have timed the ending to within a week or two based on that knowledge? Should the inability to precisely time something imply that the arc of its future can’t be obvious?

Clearly not. Does the fact that few of us know the date of our death imply that we might live forever?


You've misunderstood the point. It's not about being exactly right. It's about thinking clearly about the probabilities of different events. In your case, you explicitly call out that it's difficult to know the exact timing - so if someone had offered you even money for some particular week, you'd say no. If they offered you 100-1 for said week, it's a great bet, even if you lose.

Surprised to see this upvoted because the takeaway is completely incorrect, and based on the anecdotal evidence of one advertiser.

As someone who spends seven figures every month on Google ads, what’s much more likely to be happening here is that the individual advertiser is either getting outcompeted or they’re executing ads poorly.

Google ads revenue in the US continues to grow every quarter. And, since advertisers will generally invest in ads until the last dollar is break even, it’s likely that the total value advertisers unlock through Google ads is growing as well. Whether that’s true or not, the notion that value generated for advertisers is “dead” is absurd.


Your experience is 100% compatible with the linked article: the seven-figure spender is presumably running a much higher margin business and can scale narrowly profitable ads much more effectively. The natural equilibrium for a perfect ad market is for the ad spend to be exactly equal to the increase in revenue: a perfectly efficient market with no profit for the advertiser. Google (and Meta et al) are so good that for many SMBs they are completely cornered at the zero-point: spend as much as you can just to stay in the same place financially.


> The natural equilibrium for a perfect ad market is for the ad spend to be exactly equal to the increase in revenue

Not quite, the equilibrium is when marginal ad spend results in no change to profit. The ad spend at equilibrium should result in increased profit compared to no ad spend.
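A toy model of the difference, with an entirely made-up diminishing-returns revenue curve:

    # revenue(spend) = K * sqrt(spend): profit peaks where *marginal*
    # revenue per dollar equals 1, not where spend equals total revenue gain.
    from math import sqrt

    K = 200.0
    revenue = lambda s: K * sqrt(s)
    profit = lambda s: revenue(s) - s

    best = (K / 2) ** 2     # d(revenue)/d(spend) = K/(2*sqrt(s)) = 1
    print(profit(best))     # 10000.0: positive profit at the equilibrium
    print(profit(K ** 2))   # 0.0: the zero-profit point is further out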


I just Googled "kids magic show in Durban" and his ad showed up in the top slot (sorry if this post has swamped your ad bill); and as a bonus, the Gemini AI blurb also touted him: "For kids' magic shows in Durban, look for local entertainers like Big Top Entertainment..."

Doesn't seem like the issue is he's being outbid by international conglomerates with million dollar budgets. Maybe the kids magic show market has cooled in South Africa? Or users have left Google? Curious what we are to conclude here.


Google ads are very time & location dependent; the fact that it's showing to you might be a bad sign, since you are most likely not close to Durban and this seems like an ad you only want to run locally.


Yes, our ads were geo-fenced when I had them on. We have always had a good web presence; I think the conclusion is that nobody looks for services on Google anymore, and our reliance on it above other channels is no longer viable.


Everybody looks on Google for services; or ChatGPT googles/bings it for them. Still the same.

We had a 10x jump just last month for my own company.

For a bigger company I consult with, we've had stable revenue for the last 2 years even though search traffic declined by 50%; our Google ads still perform the same. In general, buying intent still seems to come through Google, with only about 10% via GPT.

That share will keep growing, but a full drop in 3 months means something else is wrong.


If I google for kids show in Durban, even if not from Durban, I want to see Durban related results.


There is no ad; that's organic. The money ran out the day before yesterday.


I run a small software business and I know various other people who run small software businesses. We are all pretty much agreed that Google Ads have become less and less profitable, year on year. Most of us have now given up on PPC ads.


And equally, I know many people running non-software businesses whose experience is the complete opposite of yours: Google ads has driven, and continues to drive, the majority of their revenue.

I expected them to start seeing a hit or significant decline by now, and even told them as much, but to my honest surprise it hasn't come to pass.


Agree, ran a business for years and I’ve seen the slow but steady decline of Google ads.

Ultimately I relied more on returning customers and word-of-mouth recommendations, and kept lowering the Google ads budget.


I've run Google PPC on-and-off for 20+ years. It's definitely way harder to make money with them now, and the complexity is now through the roof, which makes it way harder for a novice to optimize their campaign. I steer small businesses away because it's too easy to screw up and lose your shirt on PPC without professional help.


As a small software business, do you have other approaches to ads?


I spend a small amount on Adwords and that is pretty much it. I gave up on Bing PPC. Facebook Ads aren't suitable for my market. LinkedIn Ads are too expensive. I tried Reddit Ads, but that was a disaster. See also:

https://successfulsoftware.net/2025/12/22/is-the-golden-age-...


What about sites like bitsdujour? I get an email with deals from them every so often (that I subscribe to) and have spent money on licenses after finding software that I liked.


I did bitsdujour a number of times. Each time I got fewer sales than the previous time, until it wasn't worth bothering. I'm still doing:

https://www.artisanalsoftwarefestival.com/

(25% off now!)


He's been using AdWords for 10 years, so I wouldn't assume incompetence there.

It's just as likely that people are simply spending less on entertainment due to high cost of living.


> and based on the anecdotal evidence of one advertiser

The author admits as much.


The question is, why has this post been massively upvoted?

It contains zero useful information. Just somebody struggling with AdWords and they don't know why. Not helpful.

I have to assume the vast majority of upvotes are based on the title alone, assuming it's about Search? A large proportion of top level comments are about Search too. Depressing.


Things are upvoted because people feel like discussing the subject. The actual article is usually just a conversation starter, if it's read at all.


Posting "Google is bad" will pretty much always get you to the top 5 spots on this site.


Massively? I can't know. I read the article and upvoted 1) because it suggests a rocky road ahead for Google and 2) because, as you may have guessed, I dislike ads, dislike Google's complicity in ads, and so am happy to discuss.

I happen to in fact think we have reached an inflection point. Whether "Google is dead" depends probably a good deal on where they go now.


Because if you go to /r/ppc or /r/googleads, you will see that the experience of the majority is exactly the same.


The "Google is dead" title in the AI age, probably.


I am fairly confident that the answer is that most people vote based on the title/headline without ever clicking through. I am likely guilty of this as well sometimes. It takes discipline to avoid this behaviour.

> We find that most users do not read the article that they vote on, and that, in total, 73% of posts were rated (i.e., upvoted or downvoted) without first viewing the content. [0]

In this case, my guess is that people are noticing less and less utility from Google search, and that was why they voted like they did.

This same phenomenon is what gives newspaper editors far more power than the journalists, as it is the editors who not only decide the stories to be covered, but even more importantly, they decide the headline. Most people just scan the headlines while subconsciously looking for confirmation of their own biases.

[0] https://arxiv.org/pdf/1703.05267


meta comment separated for its own discussion

I tried to find that paper via google search first, and I failed after 3 different searches. I then opened my not-important-stuff LLM, chatgpt.com, and found it in 3 interactions, where in the 3rd I made it use search. Chatbots with search are just so good at "on the tip of my tongue" type things.

Google is in such a weird position because of their bread-and-butter legacy UX times their scale. This has to be the biggest case of innovator's dilemma of all time?


then you have people complaining that search is no longer a keyword match when people claim to know exactly what they want...


Totally! Hence the dilemma.

Google.com has "AI mode," and it tries to intelligently decide when to suggest that based on a search query. I could likely have clicked AI Mode on google.com once it gave me a crap SERP response, and used that to find the same thing. But I instinctively just went to chatgpt.com instead. I am not a total moron; I use Gemini, Claude, and GPT APIs in the 2 LLM-enabled products that I am working on...

However, just last week I noticed that the AI Mode default replies for some queries were just horrible. Like gpt-3.5 quality wrong assumptions. For the first time I saw google.com as the worst option. I cannot be the only one.

I think that I might understand the problem. Google has the tech, but as a public company they cannot start to lose money on every search query, right? The quarterly would look bad, bonuses would go down. Same reason ULA can't build Starship, even if they could and wanted to. However, OpenAI can lose money on every query. SOTA inference is not cheap.


> seven figures every month on Google ads

What are you advertising?


Basically any online shop with decent volume / revenue is going to be spending 100s of thousands if not millions of dollars a month on Google ads. (Not just Google Ads, also Facebook ads etc.)

It used to be possible to get by with "organic" search traffic and some SEO... but google search looked completely different back then. Now when you look for something it's an AI box, products (google merchant) ad box, ad (promoted results) box, ... then there's a couple of (like two) results that are "organic" (whatever that means these days) and that's it. And we all know that when you want to hide something, you put it on the second page of google search results. So the space for doing online business "ad free" has been squeezed out over time.

And the K-shaped economy is totally true in this ecomm space. These days say 15% of your revenue gets eaten by ads, but you also have say 50% higher revenue overall. At some point it becomes a margin game and the bigger players will start squeezing out the smaller ones, because the bigger ones can operate on tighter margins (making up the difference with volume) which the smaller ones simply can't afford. The operating costs of an eshop that sells 10000 items a month are not that different from those of an eshop selling 100000 items a month (i.e. not 10x, more like 2-3x). But selling 10x items gives you the volume you need to be able to lower your margins and put the difference into ads.

BTW all of this is handled by professional online marketing people with increasingly widespread use of AI so there's no room for the small players to make it big while not being optimized to the gills. This is why most small advertisers are seeing small or negative returns while Google and Meta are making tens if not hundreds of billions in ad revenue... The ads work, but the amounts you need to spend and the optimization level you need to have is in a completely different galaxy than it was 10 years ago.


Either Claude or OpenAI, going by all the ads I see.


> As someone who spends seven figures every month on Google ads, what’s much more likely to be happening here is that the individual advertiser is either getting outcompeted or they’re executing ads poorly.

Outcompeted by whom??? He's a performer offering local entertainment. I highly doubt that people searching for "entertainer in durban" are getting ads for Cirque du Soleil.

His ad is probably on the first page for that search term; the problem is more likely that no one is looking at that first page anymore.


> Surprised to see this upvoted because the takeaway is completely incorrect

It's the standard actually. Hot takes get more votes and hot takes are usually wrong. Experts have non-controversial opinions, which are boring (so no impulse to upvote), and there are 1000x more non-experts with blogs. Add to that HN culture which values contrarian-ness. So HN front page blog posts are almost entirely incorrect, but spicy


As this incident unfolds, what’s the best way to estimate how many additional hours it’s likely to last? My intuition is that the expected remaining duration increases the longer the outage persists, but that would ultimately depend on the historical distribution of similar incidents. Is that kind of data available anywhere?


To my understanding the main problem is DynamoDB being down, and DynamoDB is what a lot of AWS services use for their eventing systems behind the scenes. So there's probably like 500 billion unprocessed events that'll need to get processed even when they get everything back online. It's gonna be a long one.


500 billion events. It always blows my mind how many people use AWS.


I know nothing. But I'd imagine the number of 'events' generated during this period of downtime will eclipse that number every minute.


"I felt a great disturbance in us-east-1, as if millions of outage events suddenly cried out in terror and were suddenly silenced"

(Be interesting to see how many events currently going to DynamoDB are actually outage information.)


I wonder how many companies have properly designed their clients, so that the timing before each re-attempt is randomised and the back-off between attempts grows exponentially.


Nowadays I think a single immediate retry is preferred over exponential backoff with jitter.

If you ran into a problem that an instant retry can't fix, chances are you will be waiting so long that your own customer doesn't care anymore.


Most companies will use the AWS SDK client's default retry policy.


Why randomized?


It’s the Thundering Herd Problem.

See https://en.wikipedia.org/wiki/Thundering_herd_problem

In short, if it’s all at the same schedule you’ll end up with surges of requests followed by lulls. You want that evened out to reduce stress on the server end.


Thank you. Bonsai and adzm as well. :)


It's just a safe pattern that's easy to implement. If your service's back-off attempts happen to be synced, for whatever reason, even if they are backing off and not slamming AWS with retries, when it comes back online they might slam your backend.

It's also polite to external services but at the scale of something like AWS that's not a concern for most.


> they might slam your backend

Heh


Helps distribute retries rather than having millions synchronize


Yes, with no prior knowledge the mathematically correct estimate is:

time left = time so far

But as you note prior knowledge will enable a better guess.


Yeah, the Copernican Principle.

> I visited the Berlin Wall. People at the time wondered how long the Wall might last. Was it a temporary aberration, or a permanent fixture of modern Europe? Standing at the Wall in 1969, I made the following argument, using the Copernican principle. I said, Well, there’s nothing special about the timing of my visit. I’m just travelling—you know, Europe on five dollars a day—and I’m observing the Wall because it happens to be here. My visit is random in time. So if I divide the Wall’s total history, from the beginning to the end, into four quarters, and I’m located randomly somewhere in there, there’s a fifty-percent chance that I’m in the middle two quarters—that means, not in the first quarter and not in the fourth quarter.

> Let’s suppose that I’m at the beginning of that middle fifty percent. In that case, one-quarter of the Wall’s ultimate history has passed, and there are three-quarters left in the future. In that case, the future’s three times as long as the past. On the other hand, if I’m at the other end, then three-quarters have happened already, and there’s one-quarter left in the future. In that case, the future is one-third as long as the past.

https://www.newyorker.com/magazine/1999/07/12/how-to-predict...


This thought process suggests something very wrong. The guess "it will last again as long as it has lasted so far" doesn't give any real insight. The wall was actually as likely to end five months from when they visited it, as it was to end 500 years from then.

What this "time-wise Copernican principle" gives you is a guarantee that, if you apply this logic every time you have no other knowledge and have to guess, you will get the least mean error over all of your guesses. For some events, you'll guess that they'll end in 5 minutes, and they actually end 50 years later. For others, you'll guess they'll take another 50 years and they actually end 5 minutes later. Add these two up, and overall you get 0 - you won't have either a bias to overestimating, nor to underestimating.

But this doesn't actually give you any insight into how long the event will actually last. For a single event, with no other knowledge, the probability that it will end after 1 minute is equal to the probability that it will end after the same duration that it lasted so far, and it is equal to the probability that it will end after a billion years. There is nothing at all that you can say about the probability of an event ending from pure mathematics like this; you need event-specific knowledge to draw any conclusions.

So while this Copernican principle sounds very deep and insightful, it is actually just a pretty trite mathematical observation.


But you will never guess that the latest TikTok craze will last another 50 years, and you'll never guess that Saturday Night Live (which premiered in 1075) will end 5 minutes from now. Your guesses are thus more likely to be accurate than if you ignored the information about how long something has lasted so far.


Sure, but the opposite also applies. If in 1969 you guessed that the wall would last another 20 years, then in 1989, you'll guess that the wall of Berlin will last another 40 years - when in fact it was about to fall. And in 1949, when the wall was a few months old, you'll guess that it will last for a few months at most.

So no, you're not very likely to be right at all. Now sure, if you guess "50 years" for every event, your average error rate will be even worse, across all possible events. But it is absolutely not true that SNL is more likely to last another 50 years than another 10 years. All durations are exactly as likely, given the information we have today.


If I understand the original theory, we can work out the math with a little more detail... (For clarity, the Berlin Wall was erected in 1961.)

- In 1969 (8 years after the wall was erected): You'd calculate that there's a 50% chance that the wall will fall between 1972 (8x4/3=11 years) and 1993 (8x4=32 years)

- In 1989 (28 years after the wall was erected): You'd calculate that there's a 50% chance that the wall will fall between 1998 (28x4/3=37 years) and 2073 (28x4=112 years)

- In 1961 (when the wall was, say, 6 months old): You'd calculate that there's a 50% chance that the wall will fall between 1961 (0.5x4/3=0.667 years) and 1963 (0.5x4=2 years)

I found doing the math helped to point out how wide of a range the estimate provides. And 50% of the times you use this estimation method; your estimate will correctly be within this estimated range. It's also worth pointing out that, if your visit was at a random moment between 1961 and 1989, there's only a 3.6% chance that you visited in the final year of its 28 year span, and 1.8% chance that you visited in the first 6 months.


However,

> Well, there’s nothing special about the timing of my visit. I’m just travelling—you know, Europe on five dollars a day—and I’m observing the Wall because it happens to be here.

It's relatively unlikely that you'd visit the Berlin Wall shortly after it's erected or shortly before it falls, and quite likely that you'd visit it somewhere in the middle.


No, it's exactly as likely that I'll visit it at any one time in its lifetime. Sure, if we divide its lifetime into 4 quadrants, it's more likely I'm in quadrants 2-3 than in either 1 or 4 individually. But this is sleight of hand: it's still exactly as likely that I'm in quadrants 2-3 as in quadrants (1 or 4); in other words, it's as likely I'm at one of the ends of the lifetime as it is that I am in the middle.


>So no, you're not very likely to be right at all.

Well 1/3 of the examples you gave were right.


> Saturday Night Live (which premiered in 1075)

They probably had a great skit about the revolt of the Earls against William the Conqueror.


> while this Copernican principle sounds very deep and insightful, it is actually just a pretty trite mathematical observation

It's important to flag that the principle is not trite, and it is useful.

There's been a misunderstanding of the distribution after the measurement of "time taken so far" (illuminated in the other thread), which has led to this incorrect conclusion.

To bring the core clarification from the other thread here:

The distribution is uniform before you get the measurement of time taken already. But once you get that measurement, it's no longer uniform. There's a decaying curve whose shape is defined by the time taken so far. Such that the estimate `time_left=time_so_far` is useful.


If this were actually correct, then any event ending would be a freak accident, since, according to you, the probability of something continuing increases drastically with its age. That is, according to your logic, the probability of the Berlin Wall falling within the year was at its lowest point in 1989, when it actually fell. In 1949, when it was a few months old, the probability that it would last for at least 40 years was minuscule, and that probability kept increasing rapidly until the day the wall collapsed.


That's a paradox that comes from getting ideas mixed up.

The most likely time to fail is always "right now", i.e. this is the part of the curve with the greatest height.

However, the average expected future lifetime increases as a thing ages, because survival is evidence of robustness.

Both of these statements are true and are derived from:

P(survival) = t_obs / (t_obs + t_more)

There is no contradiction.


Why is the most likely time right now? What makes right now more likely than in five minutes? I guess you're saying that if there's nothing that makes it more likely to fail at any time than at any other time, right now is the only time that's not precluded by it failing at other times? I.e. it can't fail twice, and if it fails right now it can't fail at any other time, but even if it would have failed in five minutes it can still fail right now first?


Yes that's pretty much it. There will be a decaying probability curve, because given you could fail at any time, you are less likely to survive for N units of time than for just 1 unit of time, etc.


> However, the average expected future lifetime increases as a thing ages, because survival is evidence of robustness.

This is a completely different argument that relies on various real-world assumptions, and has nothing to do with the Copernican principle, which is an abstract mathematical concept. And I actually think this does make sense, for many common categories of processes.

However, even this estimate is quite flawed, and many real-world processes that intuitively seem to follow it, don't. For example, looking at an individual animal, it sounds kinda right to say "if it survived this long, it means it's robust, so I should expect it will survive more". In reality, the lifetime of most animals is a bimodal distribution: they either die very young, because of glaring genetic defects or simply because they're small, fragile, and inexperienced; or they die at some common age that is species dependent. For example, a human that survived to 20 years of age has about the same chance of reaching 80 as one that survived to 60 years of age. And an alien who has no idea how long humans live and tries to apply this method may think "I met this human when they're 80 years old - so they'll probably live to be around 160".


Ah no, it is the Copernican principle, in mathematical form.


> The wall was actually as likely to end five months from when they visited it, as it was to end 500 years from then.

I don't think this is correct; something that has been there for, say, hundreds of years has a higher probability of still being there in a hundred years than something that has been there for a month.


Is this a weird Monty Hall thing where the person next to you didn't visit the wall randomly (maybe they decided to visit on some anniversary of the wall), so for them the expected lifetime of the wall is different?


Note that this is equivalent to saying "there's no way to know". This guess doesn't give any insight, it's just the function that happens to minimize the total expected error for an unknowable duration.

Edit: I should add that, more specifically, this is a property of the uniform distribution, it applies to any event for which EndsAfter(t) is uniformly distributed over all t > 0.


I'm not sure about that. Is it not sometimes useful for decision making, when you don't have any insight as to how long a thing will be? It's better than just saying "I don't know".


Not really, unless you care about something like "when I look back at my career, I don't want to have had a bias to underestimating nor overestimating outages". That's all this logic gives you: for every time you underestimate a crisis, you'll be equally likely to overestimate a different crisis. I don't think this is in any way actually useful.

Also, the worst thing you can get from this logic is to think that it is actually most likely that the future duration equals the past duration. This is very much false, and it can mislead you if you think it's true. In fact, with no other insight, all future durations are equally likely for any particular event.

The better thing to do is to get some event-specific knowledge, rather than trying to reason from a priori logic. That will easily beat this method of estimation.


You've added some useful context, but I think you're downplaying its use. It's non-obvious, and in many cases better than just saying "we don't know". For example, if some company's server has been down for an hour, and you don't know anything more, it would be reasonable to say to your boss: "I'll look into it, but without knowing more about it, statistically we have a 50% chance of it being back up in an hour".

> The better thing to do is to get some event-specific knowledge, rather than trying to reason from a priori logic

True, and all the posts above have acknowledged this.


> "I'll look into it, but without knowing more about it, stastically we have a 50% chance of it being back up in an hour"

This is exactly what I don't think is right. This particular outage has the same a priori chance of being back in 20 minutes, in one hour, in 30 hours, in two weeks, etc.


Ah, that's not correct... That explains why you think it's "trite" (which it isn't).

The distribution is uniform before you get the measurement of time taken already. But once you get that measurement, it's no longer uniform. There's a decaying curve whose shape is defined by the time taken so far. Such that the statement above is correct, and the estimate `time_left=time_so_far` is useful.


Can you suggest some mathematical reasoning that would apply?

If P(1 more minute | 1 minute so far) = x, then why would P(1 more minute | 2 minutes so far) < x?

Of course, P(it will last for 2 minutes total | 2 minutes elapsed) = 0, but that can only increase the probabilities of any subsequent duration, not decrease them.


That's inverted, it would be:

If: P(1 more minute | 1 minute so far) = x

Then: P(1 more minute | 2 minutes so far) > x

The curve is:

P(survival) = t_obs / (t_obs + t_more)

(t_obs is time observed to have survived, t_more how long to survive)

Case 1 (x): It has lasted 1 minute (t_obs=1). The probability of it lasting 1 more minute is: 1 / (1 + 1) = 1/2 = 50%

Case 2: It has lasted 2 minutes (t_obs=2). The probability of it lasting 1 more minute is: 2 / (2 + 1) = 2/3 ≈ 67%

I.e. the curve is a decaying curve, but the shape / height of it changes based on t_obs.

That gets to the whole point of this, which is that the length of time something has survived is useful / provides some information on how long it is likely to survive.


> P(survival) = t_obs / (t_obs + t_more)

Where are you getting this formula from? Either way, it doesn't have the property we were originally discussing: the claim that the best estimate of the duration of an event is double its current age. That is, by this formula, the probability of anything collapsing in the next millisecond is P(1 more millisecond | t_obs) = t_obs / (t_obs + 1ms) ~= 1 for any t_obs >> 1ms. So by this logic, the best estimate for how much longer an event will take is that it will end right away.

The formula I've found that appears to summarize the original "Copernican argument" for duration is more complex - for 50% confidence, it would say:

  P(t_more in [1/3 t_obs, 3 t_obs]) = 50%

That is, given that we have a 50% chance of being in the middle part of an event, we should expect its future life to be between one third and three times its past life.

Of course, this can be turned on its head: we're also 50% likely to be experiencing the extreme ends of an event, so by the same logic we can also say that P(t_more = 0 [we're at the very end] or t_more = +inf [we're at the very beginning and it could last forever] ) is also 50%. So the chance t_more > t_obs is equal to the chance it's any other value. So we have precisely 0 information.

The bottom line is that you can't get more information out a uniform distribution. If we assume all future durations have the same probability, then they have the same probability, and we can't predict anything useful about them. We can play word games, like this 50% CI thing, but it's just that - word games, not actual insight.


I think the main thing to clarify is:

It's not a uniform distribution after the first measurement, t_obs. That enables us to update the distribution, and it becomes a decaying one.

I think you mistakenly believe the distribution is still uniform after that measurement.

The best guess, that it will last for as long as it already survived for, is actually the "median" of that distribution. The median isn't the highest point on the probability curve, but the point where half the area under the curve is before it, and half the area under the curve is after it.

And the above equation is consistent with that.
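A quick numeric check that the median of that curve really is "time left = time so far":

    # P(survive t_more more | survived t_obs) = t_obs / (t_obs + t_more).
    # The median remaining lifetime solves t_obs / (t_obs + t_more) = 1/2,
    # i.e. t_more = t_obs.
    def p_survive(t_obs, t_more):
        return t_obs / (t_obs + t_more)

    t_obs = 8.0   # e.g. the Wall, 8 years old in 1969
    t_more = 0.0
    while p_survive(t_obs, t_more) > 0.5:
        t_more += 0.01
    print(round(t_more, 2))  # ~8.0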


I used Claude to get the outage starts and ends from the post-event summaries for major historical AWS outages: https://aws.amazon.com/premiumsupport/technology/pes/

The cumulative distribution actually ends up pretty exponential which (I think) means that if you estimate the amount of time left in the outage as the mean of all outages that are longer than the current outage, you end up with a flat value that's around 8 hours, if I've done my maths right.

Not a statistician so I'm sure I've committed some statistical crimes there!

Unfortunately I can't find an easy way to upload images of the charts I've made right now, but you can tinker with my data:

    cause,outage_start,outage_duration,incident_duration
    Cell management system bug,2024-07-30T21:45:00.000000+0000,0.2861111111111111,1.4951388888888888
    Latent software defect,2023-06-13T18:49:00.000000+0000,0.08055555555555555,0.15833333333333333
    Automated scaling activity,2021-12-07T15:30:00.000000+0000,0.2861111111111111,0.3736111111111111
    Network device operating system bug,2021-09-01T22:30:00.000000+0000,0.2583333333333333,0.2583333333333333
    Thread count exceeded limit,2020-11-25T13:15:00.000000+0000,0.7138888888888889,0.7194444444444444
    Datacenter cooling system failure,2019-08-23T03:36:00.000000+0000,0.24583333333333332,0.24583333333333332
    Configuration error removed setting,2018-11-21T23:19:00.000000+0000,0.058333333333333334,0.058333333333333334
    Command input error,2017-02-28T17:37:00.000000+0000,0.17847222222222223,0.17847222222222223
    Utility power failure,2016-06-05T05:25:00.000000+0000,0.3993055555555555,0.3993055555555555
    Network disruption triggering bug,2015-09-20T09:19:00.000000+0000,0.20208333333333334,0.20208333333333334
    Transformer failure,2014-08-07T17:41:00.000000+0000,0.13055555555555556,3.4055555555555554
    Power loss to servers,2014-06-14T04:16:00.000000+0000,0.08333333333333333,0.17638888888888887
    Utility power loss,2013-12-18T06:05:00.000000+0000,0.07013888888888889,0.11388888888888889
    Maintenance process error,2012-12-24T20:24:00.000000+0000,0.8270833333333333,0.9868055555555555
    Memory leak in agent,2012-10-22T17:00:00.000000+0000,0.26041666666666663,0.4930555555555555
    Electrical storm causing failures,2012-06-30T02:24:00.000000+0000,0.20902777777777776,0.25416666666666665
    Network configuration change error,2011-04-21T07:47:00.000000+0000,1.4881944444444444,3.592361111111111
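And here's a sketch of the computation, in case anyone wants to reproduce it (durations assumed to be in days, matching the values above; the filename is made up):

    import csv, io

    def expected_remaining_days(csv_text, elapsed_days):
        # Mean duration of historical outages longer than the current one,
        # minus the time already elapsed.
        rows = csv.DictReader(io.StringIO(csv_text))
        durations = [float(r["outage_duration"]) for r in rows]
        longer = [d for d in durations if d > elapsed_days]
        if not longer:
            return None  # already longer than anything on record
        return sum(longer) / len(longer) - elapsed_days

    # e.g. 4 hours into an outage:
    print(expected_remaining_days(open("aws_outages.csv").read(), 4 / 24))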


Generally, expect issues for the rest of the day: AWS will recover slowly, then anyone that relies on AWS will recover slowly. All the background jobs which are stuck will need processing.


Rule of thumb is that the estimated remaining duration of an outage is equal to the current elapsed duration of the outage.


1440 min


As is the case with most complex problems like this, the correct answer is: it depends.

In this case, it depends on what the crux of your business is. Sometimes the crux is building world-class technology. Sometimes the crux is customer acquisition.

If the crux of your business is customer acquisition, then an exceptional business co-founder will actually be the most important ingredient to long-term success.

This is one of the biggest weaknesses I've noticed in YC's mantra. In most industries, just building something people want doesn't lead to success - you have to excel at customer acquisition also. And, in these industries, as your business matures, you realize that customer acquisition, at scale, is actually the hardest problem to solve.


I think you've just argued yourself out of a position of "it depends." This is one of the few areas where there's no wiggle room. A worthy business co-founder MUST be able to bring something practical to the table, and in my experience as it also seems to be yours, there's only really two ways they can do so:

- Be great at customer acquisition (or at least as the original article says, bring in a "customer waitlist")

- Have or bring in actual funding

If the business co-founder can't even bring one of these two things to the table, there is no justification whatsoever for them to hold a meaningful share of the startup's ownership.


Years ago, I was lying in the grass, having a conversation with a fellow founder, when he noted how, on most days, he would forget to eat lunch because he was so engrossed with his work. I thought to myself, That's ridiculous. Everyone notices when they're hungry. This was just a thinly veiled brag about his work ethic.

Nowadays, I find myself skipping lunch every other day - out of forgetfulness.


> Why pretend to be smart and play it safe? True understanding is rare and hard-won, so why claim it before you are sure of it? Isn't it more advantageous to embrace your stupidity/ignorance and be underestimated?

I wish this were true, and I do think this mindset would be optimal if everyone adopted it. Unfortunately, real workplaces are filled with people who are confident and wrong. As a leader, if your intuition is more accurate than your peers and you care about objective success, it’s important to assert yourself.


> Hot take but 99% of all profitable businesses have no moat, and there’s absolutely nothing wrong with that. You can still make lots of $$$.

Can you make $157b without a moat? Or, anything close to that? That's the more relevant question at hand.


DeepSeek just further reinforces the idea that there is a first-move disadvantage in developing AI models.

When someone can replicate your model for 5% of the cost in 2 years, I can only see 2 rational decisions:

1) Start focusing on cost efficiency today to reduce the advantage of the second mover (i.e. trade growth for profitability)

2) Figure out how to build a real competitive moat through one or more of the following: economies of scale, network effects, regulatory capture

On the second point, it seems to me like the only realistic strategy for companies like OpenAI is to turn themselves into a platform that benefits from direct network effects. Whether that's actually feasible is another question.


This is wrong. First mover advantage is strong. This is why OpenAI is much bigger than Mistral, despite what you said.

First mover advantage acquired and keeps subscribers.

No one really cares if you matched GPT4o one year later. OpenAI has had a full year to optimize the model, build tools around the model, and used the model to generate better data for their next generation foundational model.


What is OpenAI's first-mover moat? I switched to Claude with absolutely no friction or moat-jumping.


What is Google's first mover moat? I switched to Bing/DuckDuckGo with absolutely no friction or moat jumping.

Brands are incredibly powerful when talking about consumer goods.


Google's moat was significantly better results than the competition for about 2 decades.

Your analogy is valid at this time, but proves the GP's point, not yours.


I think it's worth double clicking here. Why did Google have significantly better search results for a long time?

1) There was a data flywheel effect, wherein Google was able to improve search results by analyzing the vast amount of user activity on its site.

2) There were real economies of scale in managing the cost of data centers and servers

3) Their advertising business model benefited from network effects, wherein advertisers don't want to bother giving money to a search engine with a much smaller user base. This profitability funded R&D that competitors couldn't match.

There are probably more that I'm missing, but I think the primary takeaway is that Google's scale, in and of itself, led to a better product.

Can the same be said for OpenAI? I can't think of any strong economies of scale or network effects for them, but maybe I'm missing something. Put another way, how does OpenAI's product or business model get significantly better as more people use their service?


You are forgetting a bit, I worked in some of the large datacenters where both Google and Yahoo had cages.

1) Google copied the Hotmail model of strapping commodity PC components to cheap boards and building software to deal with complexity.

2) Yahoo had a much larger cage, filled with very, very expensive and large DEC machines, with one poor guy sitting at a desk in there almost full time rebooting the systems etc. ... I hope he has any hearing left today.

3) Just right before the .com crash, I was in a cage next to Google's racking dozens of brand new Netra T1s, which were pretty slow and expensive...that company I was working for died in the crash.

Look at Google's web page:

https://www.webdesignmuseum.org/gallery/google-1999

Compare that to Yahoo:

https://www.webdesignmuseum.org/gallery/yahoo-in-1999

Or the company they originally tried to sell Google to, Excite:

https://www.webdesignmuseum.org/gallery/excite-2001

Google grew to be profitable because they controlled costs, invested in software vs service contracts and enterprise gear, had a simple non-intrusive text based ad model etc...

Most of what you mention above came well after that; the model of focusing on users and thrift is what allowed them to scale, and the rest is survivorship bias. Internal incentives that directed capital expenditures to meet the mission, rather than to protect people's backs, were absolutely related to their survival.

Even though it was a metasearch, my personal preference was SavvySearch, until it was bought and killed or whatever that story was.

OpenAI is far more like Yahoo than Google.


> I hope he has any hearing left today

I opted for a fanless graphics board, for just that reason.


In theory, the more people use the product, the more OpenAI knows what they are asking about and what they do after the first result, the better it can align its model to deliver better results.

A similar dynamic occurred in the early days of search engines.


I call it the experience flywheel. Humans come with problems, the AI assistant generates some ideas, the human tries them out and comes back to iterate. The model gets feedback on prior ideas. So you could say the AI tested an idea in the real world, using a human. This happens many times over for 300M users at OpenAI. They put a trillion tokens into human brains, and as many into their logs. The influence is bidirectional. People adapt to the model, and the model adapts to us. But that is in theory.

In practice I never heard OpenAI mention how they use chat logs for improving the model. They are either afraid to say, for privacy reasons, or want to keep it secret for technical advantage. But just think about the billions of sessions per month. A large number of them contain extensive problem solving. So the LLMs can collect experience, and use it to improve problem solving. This makes them into a flywheel of human experience.


They have more data on what people want from models?

Their SOTA models can generate better synthetic data for the next training run - leading to a flywheel effect?


Google wasn't the first mover in search. They were at least second if not third.


> What is Google's first mover moat?

AdSense


But _why_ did AdSense work? They had to bootstrap with eyeballs.

Claude has effectively no eyeballs. API calls != eyeballs.


It's like people forget Google is an ad company


But most of the money to be made in AI is B2B, no ? Not direct consumer products like ChatGPT being used by the public


*sigh*

This broken record again.

Just observe reality. OpenAI is leading, by far.

All these "OpenAI has no moat" arguments will only make sense whenever there's a material, observable (as in not imaginary), shift on their market share.


>What is OpenAI's first-mover moat?

The same one that underpins the entire existence of a little company called Spotify: I'm just too lazy to cancel my subscription and move to a newer player.


Not exactly a good sign for OpenAI considering Spotify has no power to increase prices enough such that it can earn a decent profit. Spotify’s potential is capped at whatever Apple/Amazon/Alphabet let them earn.


OpenAI has a lot more revenue than Claude.

Late in 2024, OpenAI had $3.7b in revenue. Meanwhile, Claude’s mobile app hit $1 million in revenue around the same time.


> Late in 2024, OpenAI had $3.7b in revenue

Where do they report these ?

edit i found it here https://www.cnbc.com/2024/09/27/openai-sees-5-billion-loss-t...

"OpenAI sees roughly $5 billion loss this year on $3.7 billion in revenue"


Brand - it's the most powerful first-mover advantage in this space.

ChatGPT is still vastly more popular than other, similar chat bots.


almost everyone I know is the same. 'Claude seems to be better and can take more data' is what I hear a lot.


I moved 100% over to deepseek. No switch cost. Zero.


These things aren't the same, though... yet.

ChatGPT is somewhat less censored (certainly on topics painful to the CCP), and GPT is multi-modal, which is a big selling point.

Depends on your use-case, of course.


One moat will eventually come in the form of personal knowledge about you - consider talking with a close friend of many years vs a stranger


Couldn't you just copy all your conversations over?


OpenAI does not have a business model that is cashflow positive at this point and/or a product that gives them a significant leg up in the same moat sense Office/Teams might give to Microsoft.


Companies in the mobile era took a decade or more to become profitable. For example, Uber and Airbnb.

Why do you expect OpenAI to become profitable after 3 years of chatgpt?


Interest rates have an effect too, Uber and Airbnb were starting in a much more fundraising friendly time.


High interest rates are supposed to force the remaining businesses out there to be profitable, so in theory, the startups of today should be far faster to profitability or they burn out.


True, but it makes it much more difficult to get started in the first place.


Nobody expects it, but what we know for sure is that they have burnt billions of dollars. If other startups can get there spending millions, the fact is that OpenAI won't ever be profitable.

And more important (for us), let the hiring frenzy start again :)


They have a ton of revenue and high gross margins. They burn billions because they need to keep training ever better models until the market slows and competition consolidates.


The counter argument is that they won't be able to sustain those gross margins when the market matures because they don't have an effective moat.

In this world, R&D costs and gross margin/revenue are inextricably correlated.


When the market matures, there will be fewer competitors so they won’t need to sustain the level of investment.

The market always consolidates when it matures. Every time. The market always consolidates into 2-3 big players. Often a duopoly. OpenAI is trying to be one of the two or three companies left standing.


> First mover advantage acquired and keeps subscribers.

Does it? As a chat-based (Claude Pro, ChatGPT Plus etc.) user, LLMs have zero stickiness to me right now, and the APIs hardly can be called moats either.


If it's for the mass consumer market then it does matter. Ask any non-technical person around you. Chances are high that they know ChatGPT but can't name a single other AI model or service. Gemini, just a distant maybe. Claude, definitely not; I'd be hard pressed to find anyone even among my technical friends who knows about Claude.


They probably know CoPilot as the thing Microsoft is trying to shove down their throat...


They also burnt a hell of a lot more cash. That’s a disadvantage.


I feel like AI tech just reverse scales and reverse flywheels, unlike the tech giant walls and moats now, and I think that is wonderful. OpenAI has really never made sense from a financial standpoint and that is healthier for humans. There’s no network effect because there’s no social aspect to AI chatbots. I can hop on DeepSeek from Google Gemini or OpenAI at ease because I don’t have to have friends there and/or convince them to move. AI is going to be a race to the bottom that keeps prices low to zero. In fact I don’t know how they are going to monetize it at all.


> DeepSeek just further reinforces the idea that there is a first-move disadvantage in developing AI models.

You are assuming that what DeepSeek achieved can be reasonably easily replicated by other companies. Then the question is: with all the big techs and tons of startups in China and the US involved, how come none of those companies succeeded?

DeepSeek is unique.


Deepseek is unique, but the US has consistently underestimated Chinese R&D, which is not a winning strategy in iterated games.


There seems to have been a 100-fold uptick in jingoists in the last 3-4 years, which makes my head hurt, but I think there is no consistent "underestimation" in academic circles? I think I have read articles about up-and-coming Chinese STEM for like 20 years.


Yes, for people in academia the trend is clear, but it seems that Wall Street didn't believe this was possible. They assume that spending more money is all you need to dominate technology. Wrong! Technology is about human potential. If you have less money but a bigger investment in people, you'll win the technological race.


I think Wall Street is in for a surprise, as they have been profiting from liquidating the inefficiency of worker trust and loyalty for quite some time now.

I think they think American engineering excellence was due to neoliberal ingenuity vis-à-vis the USSR, not the engineers and the transfer of academic legacy from generation to generation.


This is even more apparent when large tech corporations are, supposedly, in a big competition but at the same time firing thousands of developers and scientists. Are they interested in making progress or just reducing costs?


What does DeepSeek, or really High-Flyer, do that is particularly exceptional regarding employees? HFT firms and other elite law firms or hedge funds are known to have pretty zany benefits.


Orwellian Communism is the opposite of investing in people.


Whatever you think about the Chinese system, they educate hundreds of thousands of engineers and scientists every year. That's a fact.


Precisely. This is the view from the ivory tower.


That doesn't change the calculus regarding the actions you would pick externally; in fact it only strengthens the case for increased tech restrictions and more funding.


Unique, yes, but isn't their method open? I read something about a group replicating a smaller variant of their main model.


Which brings up the question: if LLMs are an asset of such strategic value, why did China allow DeepSeek to be released?

I see two possibilities here: either the CCP is not as all-reaching as we think, or the value of the technology isn't critical and the release was cleared with the CCP, maybe even timed to come right after Trump's announcement of American AI supremacy.


I really doubt there was any intention behind it at all. I bet deepseek themselves are surprised at the impact this is having, and probably regret releasing so much information into the open.


It's early innings, and supporting the open source community could be viewed by the CCP as an effective way to undermine the US's lead in AI.

In a way, their strategy could be:

1) Let the US invest $1 trillion in R&D

2) Support the open source community such that their capability to replicate these models only marginally lags the private sector

3) When R&D costs are more manageable, lean in and play catch up


It is hard to estimate how much it is "didn't care", "didn't know" or "did it" I think. Rather pointless unless there are public party discussion about it to read.


It will be assumed by the American policy establishment that this represents what the CCP doesn't consider important, meaning that they have even better stuff in store. It will also be assumed that this was timed to take a dump on Trump's announcement, like you said.

And it did a great job. Nvidia stock's sunk, and investors are going to be asking if it's really that smart to give American AI companies their money when the Chinese can do something similar for significantly less money.


I mean, it's a strategic asset in the sense that it's already devalued a lot of the American tech companies because they're so heavily invested in AI. Just look at NVDA today.


We have one success after ~two years of ChatGPT hype (and therefore subsequent replication attempts). That's as fast as it gets.


You're making some big assumptions projecting into the future. One, that DeepSeek takes market position; two, that the information they have released is honest regarding training usage, spend, etc.

There's a lot more still to unpack, and I don't expect this to stay solely in the tech realm. It seems too politically sensitive.


DeepSeek is profitable, openai is not. That big expensive moat won't help much when the competition knows how to fly.


DeepSeek is not profitable. As far as I know, they don’t have any significant revenue from their models. Meanwhile, OpenAI has $3.7b in revenue last reported and has high gross margins.


tell that to the stock market then, it might change the graph direction back to green.


I’m doing the best I can.


DeepSeek's inference API has positive margin. This however does not take into account R&D costs like salaries and training. I believe OpenAI is the same in these respects, at least until now.


Why would it matter? OP is experiencing an undesirable outcome, and assigning blame for the problem seems beside the point when the main goal is to change the outcome.


Because you can reverse the situation and criticize them for being offended by everything and difficult for you to work with.


You can, only if you have the requisite self-confidence, an appetite for conflict, no fear of losing them, and/or another group of people you can substitute after their loss. That's a big ask for all but the most socially endowed of us.


If one can't figure out the causes of certain outcomes, how does one change the outcomes?

