Having an app that tells you that you have a thing is not the same as physically having the thing. Owning gold in that sense is no different than owning bitcoin: They both rely on a societal fiction that you can actually get the value of the asset when you expect that you can. In other words: The presence of a reliable financial and legal system is what makes the myth real for both assets.
We forget that the idea that the gold could be “further away from”/“not actually in the physical possession of” the human who “owns” it was not universally accepted at all points in history. In a weird way, bitcoin is more forthright in that it says out loud “trust me, this is valuable because we all agree it just is,” whereas non-custodial ownership of gold tries to imply that there’s less need for this trust because there’s an actual shiny rock on the planet somewhere and humans have a history of valuing shiny rocks.
Ultimately it seems like it’s the “trust” part that mostly matters. Bitcoin is certainly harder to trust at the moment. But gold is rallying because there is less trust overall. That’s not good for either asset.
Every successful software project reaches an equilibrium between utility for its operators and bugs, and that point very rarely settles at 0% bugs [1].
When software operators tolerate bugs, they’re signaling that they’re willing to forgo the fix in exchange for the other parts of the feature that work and that they need.
The idea that consumers will somehow not need the features that they rely on anymore is completely wrong.
That leaves the tolerable bugs, but those were always part of the negotiation: coding agents don’t change that one bit. Perhaps all they do is allow more competitors to peel away those minority groups of users who are blocked by certain unaddressed bugs. Or maybe they get those bugs fixed because it’s cheaper to do so.
> I also do not see it happening at scale while competition is considered the default operating mode of society at large.
You don’t even need competition between people and orgs, just between solutions that work more-or-less equally but come with different second-order tradeoffs. Consider two approaches that solve a company’s problem equally but create different amounts of work for different people in the organization. Which solution to choose? Who gets to decide, based on what criteria? As soon as even a little scale creeps in this is inescapable.
> The French government… announced last week that 2.5 million civil servants would stop using video conference tools from U.S. providers — including Zoom, Microsoft Teams, Webex and GoTo Meeting — by 2027 and switch to Visio, a homegrown service.
> I think the reality is that… people are willing to take on MUCH greater risk today for reward than they were prior to the pandemic.
I’ll quibble that people have no idea of the risks they’re taking. I read somewhere that amateur stock traders spend something like 4 minutes researching a purchase. Balanced portfolios are just too boring and tedious.
> Anthropic relies heavily on a combination of chips designed by Amazon Web Services known as Trainium, as well as Google’s in-house designed TPU processors, to train its AI models. Google largely uses its TPUs to train Gemini. Both chips represent major competitive threats to Nvidia’s best-selling products, known as graphics processing units, or GPUs.
So which leading AI company is going to build on Nvidia, if not OpenAI?
"Largely" is doing a lot of heavy lifting here. Yes Google and Amazon are making their own GPU chips, but they are also buying as many Nvidia chips as they can get their hands on. As are Microsoft, Meta, xAI, Tesla, Oracle and everyone else.
Google buys Nvidia GPUs for its cloud; I don't think they use them much, if at all, internally. The TPUs are used both internally and in the cloud, and now it looks like they are also delivering them to customers' own data centers.
The various AI accelerator chips, such as TPUs and Nvidia GPUs, are only compatible to the extent that some high-level tools like PyTorch and Triton (a kernel compiler) may support both. That is like saying x86 and ARM chips are compatible because gcc supports them both as targets: it does not mean you can take a binary compiled for ARM and run it on an x86 processor.
For these massive, expensive-to-train AI models the differences hit harder, because at the kernel level, where the rubber meets the road, teams are wringing every last dollar of performance out of the chips by writing hand-optimized kernels, highly customized to the chip's architecture and performance characteristics. It may go deeper than that, too, with the detailed architecture of the models themselves tweaked to perform best on a specific chip.
So, the bottom line is that you can't just take a model "compiled to run on TPUs" and train it on Nvidia chips because you happen to have spare capacity there.
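To make the gcc analogy concrete, here is a minimal PyTorch sketch (assuming `torch` is installed, plus the `torch_xla` package for the TPU path). The same high-level code dispatches to either backend, but nothing below this layer carries over between chips:

```python
# Minimal sketch of backend-agnostic PyTorch code; illustrative only.
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():
        return torch.device("cuda")            # Nvidia GPU backend
    try:
        import torch_xla.core.xla_model as xm  # TPU backend via the XLA compiler
        return xm.xla_device()
    except ImportError:
        return torch.device("cpu")             # fallback

device = pick_device()
x = torch.randn(1024, 1024, device=device)
y = x @ x  # same source line, dispatched to CUDA kernels or XLA-compiled ops
```

The portability stops at that level: a hand-optimized CUDA or Triton kernel written for an Nvidia GPU has no TPU equivalent, which is exactly why spare capacity on one vendor's chips doesn't help a model tuned for the other's.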
1. Sit out and buy the tech you need from competitors.
2. Spend to the tune of ~$100B+ in infra and talent, with no guarantee that the effort will be successful.
Meta picked option 2, but Apple has always had great success with 1 (search partnership with Google, hardware partnerships with Samsung etc.) so they are applying the same philosophy to AI as well. Their core competency is building consumer devices, and they are happy to outsource everything else.
This whole thread is about whether the most valuable startup of all time will be able to raise enough money to see the next calendar year.
It's definitely rational to decide to pay wholesale for LLMs given:
- consumer adoption is unclear. The "killer app" for OS integration has yet to ship by any vendor.
- owning SOTA foundation models can put you into a situation where you need to spend $100B with no clear return. This money gets spent up front regardless of how much value consumers derive from the product, or if they even use it at all. This is a lot of money!
- as Apple has "missed" the last couple of years of the AI craze, there have been no meaningful ill effects on their business. Beyond the tech press, nobody cares yet.
I mean, they tried. They just tried and failed. It may work out for them, though — two years ago it looked like lift-off was likely, or at least possible, so having a frontier model was existential. Today it looks like you might be able to save many billions by being a fast follower. I wouldn’t be surprised if the lift-off narrative comes back around though; we still have maybe a decade until we really understand the best business model for LLMs and their siblings.
I think you are right. Their generative AI was clearly underwhelming. They have been losing many staff from their AI team.
I’m not sure it matters though. They just had a stonking quarter. iPhone sales are surging ahead. Their customers clearly don’t care about AI or Siri’s lacklustre performance.
> Their customers clearly don’t care about AI or Siri’s lacklustre performance.
I would rather say their products didn’t lose value just because they didn’t get an improvement there. Everyone agrees that Siri sucks, but I’m pretty sure they tried to replace it with a natural-language version built from the ground up and realised it just didn’t work out yet. Yes, they have a bad but at least kinda-working voice assistant with lots of integrations into other apps. But replacing that with something that promises to do stuff and then does nothing, takes long to respond, and has fewer integrations due to the lack of keywords would have been a bad idea if the technology wasn’t there yet.
We do know that they made a number of promises on AI[1] and then had to roll them back because the results were so poor[2]. They then went on to fire the person responsible for this division[3].
That doesn't sound like a financial decision to me.
They tried to do something that probably would have looked like Copilot integration into Windows, and they chose not to do that, because they discovered that it sucked.
So, they failed in an internal sense, which is better than the externalized kind of failure that Microsoft experienced.
I think that the nut that hasn't been cracked is: how do you get LLMs to replace the OS shell and the core set of apps that folks use? I think Microsoft is trying, by shipping stuff that sucks and pissing off customers, while Apple tried internally and declined to ship it. OpenClaw might be the most interesting stab in that direction, but even that doesn't feel like the last word on the subject.
Well they tried and they failed. In that case maybe the smartest move is not to play. Looks like the technology is largely turning into a commodity in the long run anyways. So sitting this out and letting others make the mistakes first might not be the worst of all ideas.
Sure, Siri is, but do people really buy their phone based on a voice assistant? We're nowhere near having an AI-first UX a la "Her", and it's unclear we'll even go in that direction in the next 10 years.
I think Apple is waiting for the bubble to deflate, and will then do something different. And they have a ready-made user base for whatever they can make money from.
If they were taking that approach, they would have absolutely first-class integration between AI tools and user data, complete with proper isolation for security and privacy and convenient ways for users to give agents access to the right things. And they would bide their time for the right models to show up at the right price with the right privacy guarantees.
They are apparently working on, and going to release, two(!) different versions of Siri. Idk, that just screams "leadership doesn't know what to do and can't make a tough decision" to me. But who knows? Maybe two versions of Siri is what people will want.
It sounds like the first one, based on Gemini, will be a more limited version of the second ("competitive with Gemini 3"). IDK if the second is also based on Gemini, but I'd be surprised if that weren't the case.
Seems like it's more a ramp-up than two completely separate Siri replacements.
Nvidia had the chance to build its own AI software and chose not to. It was a good choice so far; better to sell shovels than go to the mines. But they could still go mining if the other miners start making their own shovels.
If I were Nvidia I would be hedging my bets a little. OpenAI looks like it's on shaky ground, it might not be around in a few years.
They do build their own software, though. They have a large body of stuff they make. My guess is that it’s done to stay current, inform design and performance, and to have something to sell enterprises along with the hardware; they have purposely not gone after large consumer markets with their model offerings as far as I can tell.
That’s interesting, I didn’t know that about Anthropic. I guess it wouldn’t really make sense to compete with OpenAI and everyone else for Nvidia chips if they can avoid it.
It's almost as if everyone here assumed that Nvidia would face no competition for a long time, but it has long been known that there are many competitors coming after its data center revenues. [0]
> So which leading AI company is going to build on Nvidia, if not OpenAI?
It's xAI.
But what matters is that there is more competition for Nvidia, and they bought Groq to reduce it. OpenAI is building its own chips, as is Meta.
The real question is this: What happens when the competition catches up with Nvidia and takes a significant slice out of their data center revenues?
> [Seth is using AI to try] to take over the job of the farmer. Planting, harvesting, etc. is the job of a farmhand (or custom operator).
Ok then Seth is missing the point of the challenge: Take over the role of the farmhand.
> Everyone is working to try to automate the farmhand out of a job, but the novelty here is the thinking that it is actually the farmer who is easiest to automate away.
Everyone knows this. There is nothing novel here. Desk jockeys who just drive computers all day (the Farmer in this example) are _far_ easier to automate away than the hands-on workers (the farmhand). That’s why it would be truly revolutionary to replace the farmhand.
Or, said another way: Anything about growing corn that is “hands on” is hard to automate, all the easy to automate stuff has already been done. And no, driving a mouse or a web browser doesn’t count as “hands on”.
> all the easy to automate stuff has already been done.
To be fair, all the stuff that hasn't been automated away is the same in all cases, farmer and farmhand alike: Monitoring to make sure the computer systems don't screw up.
The bet here is that LLMs are past the "needs monitoring" stage and can buy a multi-million-dollar farm, along with everything else, without oversight, and that Seth won't be upset about its choices in the end. Which, in fairness, is a more practical (at least less risky, from a liability point of view) bet than betting that a multi-million-dollar X9 without an operator won't end up running over a person and later upside-down in a ditch.
He may have many millions to spend on an experiment, but to truly put things to the test would require way more than that. Everyone has a limit. An MVP is a reasonable start. v2 can try to take the concept further.
More specifically, when employees are granted options contracts, the strike price of those contracts is based on the last valuation of the company prior to the grant. If all is going well and the valuation is increasing, those options are also increasing in value. Here we have a sale which values the company lower than the prior valuation. Recent option grants will likely be underwater; earlier grants would still be profitable.
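As a rough illustration with made-up numbers (actual strike prices and share counts aren’t public), an option grant’s payoff at sale is max(0, per-share proceeds minus strike) times the number of shares:

```python
# Illustrative only: hypothetical strikes, proceeds, and share counts.
def option_payoff(strike: float, per_share_proceeds: float, shares: int) -> float:
    """An option is worth exercising only if proceeds exceed the strike."""
    return max(0.0, per_share_proceeds - strike) * shares

# An early grant struck at a low valuation still pays out...
print(option_payoff(strike=2.00, per_share_proceeds=8.00, shares=10_000))   # 60000.0
# ...while a grant struck near the valuation peak is underwater.
print(option_payoff(strike=12.00, per_share_proceeds=8.00, shares=10_000))  # 0.0
```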
> The valuations you see, like $12bn, are for preferred stock.
No, the valuation is for the whole company, all of its shares, preferred and common. How this value is distributed among shareholders depends on the deal, but generally there is a “seniority”, roughly: creditors (debt holders) are paid first, preferred shares next, then common shareholders last. This order can be negotiated as part of the sale.
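Here is a minimal sketch of that waterfall with entirely hypothetical numbers, since the actual debt load and liquidation preferences aren’t public:

```python
# Hypothetical waterfall: debt is paid first, then preferred up to its
# liquidation preference, and common splits whatever remains.
def waterfall(proceeds: float, debt: float, pref: float) -> tuple[float, float, float]:
    to_debt = min(proceeds, debt)
    to_pref = min(proceeds - to_debt, pref)
    to_common = proceeds - to_debt - to_pref
    return to_debt, to_pref, to_common

print(waterfall(proceeds=5.15e9, debt=1.0e9, pref=3.0e9))
# (1000000000.0, 3000000000.0, 1150000000.0): $1.15bn left for common
```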
> So no employees got stock priced at $12bn, but all of them get paid at a $5.15bn valuation.
It’s just not possible to know what each individual employee’s outcome is. We don’t know how much of that $5.15 billion will be left over for common shareholders, including the employees. Note that employees have received salaries, so their overall outcome is greater than zero dollars, but perhaps their total compensation outcome is lower than they hoped for the time they put in.
> Not saying they did well, but depending on the 409a valuations, they still might have made money.
Yes, some might have and some might have not. We just don’t know without more details.
Edit: singron’s answer (sibling comment) attempts to model the employee outcome in a rough but reasonable way.
The way they get to $12.5bn is multiplying the preferred share price by the total outstanding shares. But the common shares, while still included in that calculation, are not worth the same amount of money.
Options on those common shares have a different strike price, which is set via a 409a valuation.
It’s possible that employees got, at the peak, grants with strike prices at a $2bn 409a valuation. We don’t know. What we do know is that no employee ever got grants with a strike price of a $12.5bn valuation. That’s just not how this works.
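To see where a headline number like $12.5bn comes from, here is the arithmetic with hypothetical figures (real share counts and prices aren’t public): the latest preferred price is multiplied across every outstanding share, even though common is separately, and lower, priced via the 409a:

```python
# Hypothetical figures; illustrative only.
preferred_price = 12.50                    # $/share paid in the last round
shares_outstanding = 1_000_000_000         # fully diluted share count
headline = preferred_price * shares_outstanding          # $12.5bn "valuation"

common_409a_price = 2.00                   # hypothetical 409a common price
implied_common = common_409a_price * shares_outstanding  # $2bn for the same count
print(f"headline=${headline/1e9:.1f}bn, common at 409a=${implied_common/1e9:.1f}bn")
```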
The tariffs haven’t really started hitting consumers yet, believe it or not. Major retailers rushed to get their Christmas goods into the U.S. before the deadlines. In 2026 we’ll start to see more of the impacts.