I believe Google has earned the most revenue of any business ever [1]
So if the idea is to unseat Google, and make LLMs that are monetized by ads -- well that would be a lot of revenue!
The problem is obviously that Google knows this, and they made huge investments in AI before anyone else
---
I guess someone wants to do to Google what Apple did to Microsoft in the mobile era -- take over the operating system that matters by building something new (mobile), not by directly trying to unseat Microsoft
The problem seems to be that no one has figured out what the network effect in LLMs is. Google has a few network effects, but the bidder / ad buyer network is very strong -- they can afford to pay out a bigger rev share than anybody else
Google also had very few competitors early on -- Yahoo was the most credible competitor for a long time. And employees didn't leave to start competitors. Whereas OpenAI has splintered into 5 or more companies, fairly early in its life
[1] at least according to the Acquired podcast, which is reputable: https://www.acquired.fm/episodes/google

edit: oops, it was profit, not revenue
> I believe Google has earned the most revenue of any business ever [1]
By yearly revenue, the highest-revenue company is Walmart, followed by Amazon, both of which make somewhere near twice the revenue of Alphabet (around 11th place, per https://en.wikipedia.org/wiki/List_of_largest_companies_by_r...). And if you account for inflation, the total lifetime revenues of the major oil companies easily dwarf Google's.
Google is nowhere close to earning the most revenue of any business ever.
> There's a difference between running on a margin of 5% and 90% at that scale
OP said revenue, not profit. And neither of those numbers is relevant to Google, which runs a 32% operating margin (28% net) [1].
That said, yes, Google is the most profitable company in the world [2]. But its $116bn is not in a different league from Microsoft's $102bn, Apple's $99bn or Saudi Aramco's $96bn.
The value of revenue as a metric is that you can't really play financial games to screw with revenue, while there's lots of stuff you can do to inflate or deflate your profit. See, e.g., Hollywood accounting.
Neither revenue nor profit is the full story, though.
Of course, even if you go by most profit, Saudi Aramco still has Google beat, because it turns out that being able to charge the market price for oil that costs you $5/barrel to produce, and being around for decades, gives you an astounding lifetime net profit.
The problem seems more to be that every last one of these companies is burning through cash at an astonishing rate. No one, least of all Google, is making a profit from AI. They keep dangling AGI in front of investors even though no one can really define what it is.

Companies like Uber and Amazon operated at a loss, true. But they had an actual product, and they never came close to losing the kind of money Google, Meta, OpenAI and Microsoft are losing.
> I believe Google has earned the most revenue of any business ever
As people have pointed out, this is wrong.

But anyway, Google's revenue last year was enough to cover the smallest point of the interval the article lays out. And barely so.

So if everything goes perfectly for the next 5 years capital-wise, and AI manages to capture Google's revenue, then under the most optimistic conditions they will be able to break even with depreciation.

Honestly, that is better than what I was expecting. But it is completely different from the picture you will see in any media.
>The problem seems to be that no one has figured out what the network effect in LLMs is.
At the very least, the exact same network effects with respect to advertising that search has. The vast majority of frequent ChatGPT users I know mostly use it like a search engine.
That said, those network effects will be massive. Ads in LLMs are going to be unprecedentedly lucrative, irrespective of the platform. Google/Meta currently charge so much for ads because they have such enormous proprietary profiles on users based on their search/communication history that they can offer advertisers the ability to target users with extraordinary granularity. But at the end of the day, the ad itself is static and obviously an ad. LLMs will make these ads dynamic and insidious, subtly injected into chats in the way a real-life conversation might happen to discuss products. LLMs will become the ultimate word-of-mouth advertisers, the final form of astroturfing.
I think the fibre optic analogy is a bad one. The key reason supply massively outstripped demand was that optical equipment massively improved in efficiency.
We are not seeing that (currently) with GPUs. Perf/watt has basically stalled out recently, while tokens per user in many use cases has gone up 100x+ (compare Claude Code usage with normal chat usage). It's very unlikely we will get breakthroughs in compute efficiency the way we did in the late 90s/2000s for fiber-optic capacity.
Secondly, I'm not convinced the capex has increased that much. From some brief research, the major tech firms (hyperscalers + Meta) were spending something like $10-15bn a month on capex in 2019. If we assume that spend has all been rebadged as AI, and adjust for inflation, it's a big ramp but not quite as big as it seems, especially when you consider that construction inflation has been horrendous virtually everywhere post-COVID.
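A rough sanity check on that inflation adjustment (a sketch; the spend midpoint and CPI factor are assumptions, not sourced figures):

    # Back-of-envelope: 2019 hyperscaler capex vs. 2025 AI capex, in real terms.
    capex_2019_per_month = 12.5e9    # midpoint of the $10-15bn/month estimate above
    cpi_factor_2019_to_2025 = 1.25   # assumed ~25% cumulative inflation
    ai_capex_2025_per_year = 400e9   # the article's ~$400bn figure for 2025

    adjusted_2019_annual = capex_2019_per_month * 12 * cpi_factor_2019_to_2025
    print(f"2019 capex in 2025 dollars: ${adjusted_2019_annual / 1e9:.0f}bn/year")
    print(f"2025 AI capex is ~{ai_capex_2025_per_year / adjusted_2019_annual:.1f}x that")

Roughly 2x in real terms: a big ramp, but smaller than a chart of nominal dollars suggests.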
What I really think is going on is some sort of prisoner's dilemma with capex. If you don't build, then you are at serious risk of shortages, assuming demand continues in even the short and medium term. That potentially means you start churning major non-AI workloads along with the AI work, from e.g. AWS. So everyone is booking up all the capacity they can get, and let's keep in mind that only a small fraction of these giant trillion-dollar numbers being thrown around, especially from OpenAI, are actually hard commitments.

To be honest, if it weren't for Claude Code I would be extremely skeptical of the demand story. But given that I now get through millions of tokens a day, if even a small percentage of knowledge workers globally adopt similar tooling, it's sort of a given that we are in for a very large shortage of compute. I'm sure there will be various market corrections along the way, but I do think we are going to require a shedload more data centres.
> We are not seeing that (currently) with GPUs. Perf/watt has basically stalled out recently, while tokens per user in many use cases has gone up 100x+ (compare Claude Code usage with normal chat usage). It's very unlikely we will get breakthroughs in compute efficiency the way we did in the late 90s/2000s for fiber-optic capacity.
At least for gaming, GPU performance per dollar has gotten a lot better in the last decade. It hasn't gotten much better in the past couple of years specifically, but I assume a lot of that is due to the increased demand for AI use driving up the price for consumers.
The difference is that with fiber, you can put more data on the same piece of glass just by swapping the equipment at the ends. And that equipment is a relatively small part of the cost; most of the cost is getting the cable in place.

With GPUs and CPUs, you need to replace the entire thing. And now they are the most expensive part of the system.

The other option is doing more with the same computing power, but we have utterly failed at that in general...
It's been worse than that. Datacentres basically need to be completely rebuilt for Blackwell chips in particular, as they mostly require liquid cooling rather than the air cooling used before. So you don't just need to replace the hardware; you need to replace all the power delivery AND provide liquid cooling, which means redesigning the entire datacentre.
This is a crucial question that often gets overlooked in the AI hype cycle. The article makes a great point about the disconnect between infrastructure investment and actual revenue generation.
A few thoughts:
1. The comparison to previous tech bubbles is apt - we're seeing massive capex without clear paths to profitability for many use cases.
2. The "build it and they will come" mentality might work for foundational models, but the application layer needs more concrete business cases.
3. Enterprise adoption is happening, but at a much slower pace than the investment would suggest. Most companies are still in pilot phases.
4. The real value might come from productivity gains rather than direct revenue - harder to measure but potentially more impactful long-term.
What's your take on which AI applications will actually generate enough value to justify the current spending levels?
There are two main threads I keep going back to when thinking about long term AI and why so many investors/statespeople are all in:
1) the labor angle: it's been stated plainly by many execs that the goal is to replace double-digit percentages of their workforce with AI of some sort. Human wages being what they are, the savings there are meaningful and seemingly worth the gamble (see the sketch after this list).
2) the military angle: the future of warfare seems to be autonomous weapons/vehicles of all sorts. Given the winner-takes-all nature of warfare, any edge you can get there is worth it. If not investing enough in AI means the US gets steamrolled by China in the Pacific (and other countries getting steamrolled by whomever China wants to sell/lend its tech to), then it seems to justify most any investment, no matter how ridiculous the current returns seem.
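To put rough numbers on the labor bet, here is a minimal sketch; every input is an illustrative assumption, not a figure from any exec:

    # Illustrative savings from replacing part of a workforce with AI.
    workforce = 100_000           # headcount of a hypothetical large enterprise
    fully_loaded_cost = 150_000   # assumed average annual cost per employee, USD
    replaced_fraction = 0.15      # a 'double-digit percentage', per the comment above
    ai_cost_per_seat = 20_000     # assumed annual AI spend per replaced worker

    replaced = workforce * replaced_fraction
    net_savings = replaced * (fully_loaded_cost - ai_cost_per_seat)
    print(f"Net savings: ${net_savings / 1e9:.2f}bn/year")   # ~$1.95bn/year

On the order of $2bn a year for a single 100k-person company, which is why execs find the gamble tempting.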
First of all, warfare is not winner-takes-all. That's a sort of naive, video-game conception. The famous quote "War is politics by other means" is much more accurate.
When armed conflicts happen, it's because the belligerents have specific objectives, and very rarely is that objective "the total obliteration of the enemy" vs something more specific and concrete like territory, access to natural resources, the creation of a vassal state that can be exploited, or sometimes purely ideological (nationalist notions growing into the idea that a people are entitled to an empire).
Anyhow, the point is warfare is not a winner takes all game of obliteration.
But also, the idea that the future of warfare will be all autonomous weapons is massively overweighting drone hype, and ignoring that a lot of the fundamentals haven't changed since the days of Bismarck, despite the rise of drones, computer-vision algorithms, etc.
A simple example is Ukraine, where the battlefield is essentially defined by the combination of traditional artillery, mines and similar fortifications, and simple observation drones that don't have any particularly complex AI. The combination creates a 20 km "no go zone" that has nothing really to do with autonomy.
In fact, the more AI-centric loitering munitions provided by US/EU firms have performed quite poorly in Ukraine, which is why they're favoring much simpler implementations like hobby FPV drone components, or remote piloting via GSM modems, etc.

Will these technologies play an increasing role in future conflicts? Of course. But they're not going to completely upend things, or obviate more traditional platforms.

Heck, another example: simple hand-coded AIs have been better than humans in dogfights for decades now. And it matters exactly zero for real-world conflicts, because what fighter pilots actually do isn't a Top Gun movie.
Warfare isn't really a winner-takes-all affair. Unless you absolutely crush your enemy, most warfare ends in a stalemate of one form or another, with the victor getting an advantage over the loser. In many cases medium-tech advantages can be countered with better logistics, willingness to trade losses, or quality of weapons.
On 1: the railroads had a better claim on that than the AI companies do. They did allow dispersed industry to integrate, did multiply their countries' GDP by a sizeable amount, and went bankrupt anyway.

If those companies replace a low double-digit percentage of the developers, and capture their entire salary, it's still not enough to reach the depreciation numbers in the article (rough numbers in the sketch below).

On 2: that could justify it... except that we are talking about fucking LLMs. What does anybody expect LLMs to do in a war that will completely obliterate some country?
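Rough numbers for that ceiling (a sketch; developer counts and salaries are assumptions, and the spend figure is the article's):

    # Upper bound on 'capture the replaced developers' salaries' as revenue.
    developers_worldwide = 30e6    # commonly cited ballpark, assumed here
    replaced_fraction = 0.15       # a low double-digit percentage
    avg_salary = 60_000            # assumed global average, USD/year

    captured = developers_worldwide * replaced_fraction * avg_salary
    annual_spend = 400e9           # the article's ~$400bn/year figure
    print(f"Captured salaries: ${captured / 1e9:.0f}bn/year")       # ~$270bn
    print(f"Covers {captured / annual_spend:.0%} of annual spend")  # ~68%

Even capturing every replaced developer's full salary falls short of the annual spend, which is the point being made above.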
I think the AI angle for warfare is overhyped. Most of the autonomous drone stuff happening in Ukraine is not running on bleeding-edge nodes. It's Radxa SBCs with process nodes from 10 years ago.
Right, the vast majority of compute for autonomous warfare will be at the edge. The latency/jammability of having to communicate with a massive datacenter halfway around the world running bleeding-edge models is a nonstarter. Not to mention that these models are overkill for something like an autonomous suicide drone that just needs a relatively simple CNN trained to recognize enemy uniforms/materiel/buildings/etc.
> it's been stated plainly by many execs that the goal is to replace double-digit percentages of their workforce with AI of some sort
Even if we grant that this is possible, have any of these execs actually thought through what happens when their competitors also replace large chunks of their workforce with AI and then begin undercutting them on price? The idea that "our prices will stay exactly the same, but our salary costs will go to zero and become pure profit instead!" is delusional even if AI can actually replace large numbers of people, which itself is quite doubtful.
Presumably if your competitors go to 0 workers before you do, they win, but in practice that's unlikely to work. Most companies would be better off buying mature tech once clear savings opportunities materialize.
This is the main thing that's been bugging me about the AI discussion. People seem to forget that capitalism is competitive, and if everyone gains the same advantage, then it's not an advantage. If the cost of labor goes down, it means companies will either need to lower their prices or increase their investment in other areas (e.g. hiring even more people now that they're cheaper).
Unless you're a monopoly, I don't see how AI will lead to these massive cost savings everyone is hoping for.
> Unless you're a monopoly, I don't see how AI will lead to these massive cost savings everyone is hoping for
"If the cost of labor goes down" and "companies...lower their prices," that means cost savings for every one of their customers. If they "increase their investment in other areas," that means lower costs of capital for all of their investments.
You're arguing that the gains from AI don't look likely to be concentrated. That's good! It's not an argument that AI won't be economically revolutionary (and value-adding).
> AI companies are competing with each other for that revenue so total spend will go down
You're describing elasticity. None of this is particularly novel. If there is sufficient demand, the thesis is met: returns may not be astronomical, but they'll be positive for at least some of the major players. (Those with the most efficient operations or ability to command a price premium.)
The comparison to railroad infrastructure is interesting.
I think the author is wrong on this point however:
> Today’s tech just cannot do what will be required of it (AI shouldn’t be dispensing medication when it can’t even count to 7).
The failures of AI are thought-provoking, and more so when considered together with results where AI performs at near-expert level on challenging benchmarks. However, perfect reasoning is hardly a requirement. Most humans are not particularly good at reasoning, and most jobs do not need it. Both humans and AI can use calculators and other tools. All that's needed is for the AI to be more or less as good as a human, while requiring much less pay.
A good exercise to appreciate the current state of AI might be to ask AI to write an essay about this topic ("how much revenue is needed to justify current AI spend, and draw parallels to the dotcom boom and building the transcontinental railroad"). Try it with two different models, using the deep research mode. I expect the results would be humbling.
...
So, in summary: we likely need on the order of hundreds of billions to low trillions of dollars annually in AI revenue to justify the present level of infrastructure and model investment. Current realized revenues are more than an order of magnitude below that.
But that’s the cold math. History suggests that such math often overlooks strategic externalities, spillover effects, hype, and speculative capital flows.
> This is one of those rather surreal situations where everyone senior in this ecosystem knows that the math doesn’t work, but they don’t know that everyone else also knows this. They thought that they were the foolish ones, who simply didn’t get it.
I don't know if it's that surreal or unexpected. There's a reason "The Emperor's New Clothes" is such a classic, enduring fable. It's happened before. It'll happen again.

Not shading the article. All good points; I was just surprised the author threw this bit in.
Railroads and fibre are better examples. Tulips are actually fucking useless as a productive asset. Railroads, fibre-optic cables, power production and datacentres are not.
In the current system, as long as you don't commit actual straight-up criminal fraud that you get charged for, you get to keep all the money you made along the way. So even if the math never makes sense, there is money to be earned for the time being. And when it fails, well, there is always the next scheme, and the next round of people who believe they can extract their share on the way.

Take promises about AGI. I would expect a company to show extensive theoretical work, with error bars and such, on exactly when they can deliver it. And if that work is sufficiently proven to be incorrect or overpromising in the slightest, to fully pay back investors for any losses.
Saying you're hoping to develop some tech that hasn't been done before isn't actually fraud. Everyone knows it's uncertain as to timing and if they'll be able to do it.
“Railway mania” in the UK in the 1840s and 1860s “involved capital investments of 15% to 20% of GDP”. US GDP is around $30 trillion. [0]
Between 1900 and 1929 the total capital invested in rail stocks and bonds in the US grew from $10.8bn (against GDP of $21.2bn) to $21.4bn [1] (1929 GDP was $104.6 billion). [2]
So the current AI capex doesn’t really seem too far fetched by comparison.
Apple’s revenue went from $13.9bn in 2005 to $391bn in 2024.
Google’s revenue went from $6.1bn to $348bn over the same period.
Microsoft’s revenue was $197.5m in 1986, $345.9m in 1987, $591m in 1988, $804.5m in 1989, $1183m in 1990, $1843m in 1991, $14.5bn by 1998 and $19.75bn in 1999.
So revenue 10x’d from ‘86 to ‘91 and again from ‘91 to ‘99.
Those are all nominal unadjusted figures, but all three companies are arguably “category defining”.
For OpenAI to go from public launch to what looks like ~$12 billion projected annualised revenue so quickly says quite a lot. If it only follows the Microsoft trajectory that would be $120bn by 2030.
But going back to the railways point: the impact of railways wasn’t measured in the revenue of just one company, but rather across the whole physical supply chain.
If you assume that AI could be as transformative to the digital supply chain (and everything it touches) then you could argue that investments of 20% of global GDP wouldn’t be crazy.
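A sketch of the two comparisons in this comment, using only the figures quoted above:

    # Railway-mania capex as a share of GDP vs. today's AI capex,
    # plus Microsoft-trajectory projections for OpenAI.
    us_gdp = 30e12                        # ~2025 US GDP, per the comment
    for share in (0.15, 0.20):            # UK railway mania: 15-20% of GDP
        print(f"{share:.0%} of US GDP: ${us_gdp * share / 1e12:.1f}tn")

    ai_capex_2025 = 400e9
    print(f"2025 AI capex: {ai_capex_2025 / us_gdp:.1%} of US GDP")   # ~1.3%

    # Microsoft 10x'd revenue in ~5 years ('86-'91) and ~8 years ('91-'99).
    openai_2025 = 12e9
    for years_per_10x in (5, 8):
        growth = 10 ** (1 / years_per_10x)
        print(f"10x per {years_per_10x}y -> 2030: ${openai_2025 * growth**5 / 1e9:.0f}bn")

The $120bn-by-2030 figure corresponds to Microsoft's faster early pace; the slower '91-'99 pace would put OpenAI nearer $50bn.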
I don't get why ads are never mentioned in the article. The current use cases of GenAI (chatbots, generate [art] for me,...) have extremely obvious monetization angles through ads, and then there's a positive chance that they can bring in revenue through more ways than that (they already do in the form of subscriptions, eg). It might be that the economics still don't work out, but at least it should be considered?
> 90% of the reason I use chat gpt is that it doesn't have ads. Shove them in and usage goes way down.
Don't you think that people would have felt similarly about early Google search?
> any type of automation (the big promise of AI) ads don't matter
I would have thought that ads have no place in an OS, but they proved me wrong on that. How sure are you they won't prove you wrong on automation ("summarize this text for me - here you go, shall I run it through Grammarly now?", "build this app for me - [uses freemium tool instead of equally capable FOSS]",...)? Never underestimate the potential for enshittification.
On one of the latest Odd Lots episodes, an analyst finally had an investment thesis that made sense to me:
They think they are building an AI god.
If you think of it in religious terms, it suddenly makes sense. Expected rate of return? One scenario has infinite expected return (some kind of Pascal's wager/mugging)!

Of course there will be no AGI. Just a planet we'll have to live on where those deluded idiots wasted our resources on some boondoggle. Maybe this kind of concentration of power is a bad thing? I think we are going to get to those kinds of questions once the party is over.
One of the things I'd love to know is: if you agree with this view, what is an average person to do? You can always short the whole market, but that's crazy risky, and as the article says, this could go on for some time.
"the industry is spending over $30 billion a month (approximately $400 billion for 2025) and only receiving a bit more than a billion a month back in revenue."
That's called a "bubble". Obviously, this time it is different until it isn't.
I own several books of trig and other tables, three slide rules and a couple of calculators, a working Commodore 64 and an IT consultancy company.
We are fiddling with LLMs as yet another tool. We are getting some great results but not earth shattering.
Tulips are very pretty flowers. I have several dozen in my garden. I have some plants that are way more valuable than tulips in my garden too.
Oh dear. I didn't even mention my crocosmia stash by name and I pissed someone off. They are jolly expensive to buy, and I have a good 30' by 5' bed stuffed full of them. A crocosmia plant in a pot costs about £5-20; I've got loads of them. You should see my acer (maple) collection. Mind blown.
Oh, AI.
It is artificial but it is not intelligent. An LLM (inter alia) is a marvelous thing. I find sheer joy in conversing with a "gpt-oss-20b F16" that runs on a £600 GPU and a slack handful of CPU and RAM, because so little gives so much.
Interesting comparison! In inflation-adjusted dollars, the entire Apollo program cost roughly $350 billion in total (2025 dollars). That was 21 launches over 11 years, 11 of them crewed, 6 of which made it to the moon. We can divide that out a couple of different ways and get either $17 billion per launch or $32 billion per year, or, if we want bigger numbers, $32 billion per crewed launch or $59 billion per flight that landed on the moon (Apollo 11, 12, 14, 15, 16, 17).

Let's use either $17 billion per launch or $32bn per year. We could compare each launch to training a whole new model, though OpenAI hasn't released official numbers for that, so any comparison would be a bit specious. There are some public guesses, so I'll reproduce them here and link to my sources, but again, they're guesses. GPT-2 is estimated at roughly $100,000 to train. OpenAI did say it cost $257/hour to train, but hasn't said how many hours it took, and the assumption is that's just the compute cost, not fully loaded (e.g. researcher salaries, office rent, etc. not included) [1]. The guess is by Karpathy, though, so I'd give it a lot of weight in being directionally accurate.

GPT-4 is guessed at costing "more than $100 million" according to sama [2], and each training run for something GPT-5-sized at $1 billion or more (also according to sama), of which you'd need several runs before getting it right, so to speak [3]. Again, these numbers should include copious amounts of NaCl.

That's just OpenAI, but we can assume the other labs are spending roughly as much. It's a bit different from the $400 billion/year that the linked article gives, but it's a differently shaped number.

If the whole of the Apollo program cost $350 billion (2025 dollars) and we're spending $400bn per year, the comparison is an entire 21-launch Apollo program every year, or roughly two Apollo launches per month. Which is still insane, mind you.

The other stat to look at, because these are unfathomably large numbers, is how much Apple invested in China, according to Patrick McGee, author of the book Apple in China. According to his book, Apple invested $275 billion in China over 5 years, or $55 billion/year, in worker training and infrastructure building. We're comparing years to months here, but Apple is a single company in the broader tech industry, and its $275 billion investment arguably paid off. Selling iPhones is a rather more proven business than LLMs, though, so there's that.
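The divisions above, spelled out (numbers are the inflation-adjusted totals quoted in this comment):

    # Apollo program cost vs. current AI spend, per the figures above.
    apollo_total = 350e9                   # whole program, 2025 dollars
    launches, years, landings = 21, 11, 6
    print(f"Per launch:  ${apollo_total / launches / 1e9:.0f}bn")   # ~$17bn
    print(f"Per year:    ${apollo_total / years / 1e9:.0f}bn")      # ~$32bn
    print(f"Per landing: ${apollo_total / landings / 1e9:.0f}bn")   # ~$58bn

    ai_spend_per_year = 400e9
    print(f"Apollo programs per year of AI spend: {ai_spend_per_year / apollo_total:.1f}")
    launch_cost = apollo_total / launches
    print(f"Apollo launches per month: {ai_spend_per_year / 12 / launch_cost:.1f}")  # ~2.0

That last line is where 'roughly two Apollo launches per month' comes from.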
> As you can imagine, when you’re the vendor, the customer and the investor in a company, there’s a strong incentive to artificially inflate the numbers by signing preferable contracts that use very large numbers, and then round-trip the capital.
It's an interesting thought experiment, but not sure it's the entire story.
Imagine at the start of the electrification era people went "We'd need to build loads of cables and power plants and stuff that's expensive, lets just stick to steam power".
It's not a bet on this making sense via pedestrian business economics but rather that it'll be a game changer.
...whether that pans out is a technological and societal question, not an economic one in my mind
> Imagine at the start of the electrification era people went "We'd need to build loads of cables and power plants and stuff that's expensive, lets just stick to steam power"
False dichotomy. There are literally infinite options between ignoring AI and spending a quarter of a trillion on it annually.
The electrification era was quite gradual compared to this. Generators invented around 1870, 70% of US houses hooked up by 1930, so about sixty years. Altman and friends seem in an awful hurry.
Fibre and railroads, again, are really good comparisons. Both involved businesses built on advancing technology. If you built your network before signalling, your costs were immediately higher than a competitor who rolled out more slowly. Similarly, do we really think Nvidia has had its last say in GPUs?
> We are entering an era where computing capital, intellectual capital, and military capital will dominate
These are bullshit terms. Capital is capital. Military production, IP production and yes, AIs running in datacentres and on the grid, are all subject to economic forces. (Folks argued railroads were a different form of capital in the 19th century, too. And fibre optics. And tulips. And dot-com companies. And computer-assembled American mortgage instruments.)
We might be investing for a golden future. We might be the Soviet Union baited into unsustainable spending commitments. The answer to these questions isn't in pretending this time is different, or that economics can be suspended when it comes to certain questions of production and return.
Which capital is most advantageous in any specific situation is dependent on context.
We could probably debate how holding onto piles of green paper doesn’t provide much advantage in certain contexts, but I suspect you’d agree with that.
My proposal is that there’s a high likelihood the bet is that green paper matters less than high powered AI systems, and as far as I can tell, that’s a reasonable bet.
The people investing in AI companies (and the big players spending in AI) are seeking Artificial General Intelligence (AGI). It's the only way they get a return on their capital.
They are investing so they can get there first. Money basically becomes meaningless at that point, whoever owns the AGI owns the world. That's the only way to get a return on that investment.
Or the AGI owns its owners and the rest of the world; getting it to respect its owner's wishes remains an unsolved problem which many people still seem to think isn't worth even spending time figuring out at this point.
> people investing in AI companies (and the big players spending in AI) are seeking Artificial General Intelligence (AGI). It's the only way they get a return on their capital
I'm assuming you're trying to get me to say "you don't, they look the same".
I'm just giving the view from an investor's point of view -- you don't expect these to eventually run like a normal business where revenue exceeds cost. You expect them to make as much revenue as they can while spending more than they make to get to AGI.
> I'm assuming you're trying to get me to say "you don't, they look the same"
No, I'm genuinely asking for a test. Pursuing ancillary revenue would, to me, indicate they're behaving more like a business and less like a moonshot.
> giving the view from an investor's point of view -- you don't expect these to eventually run like a normal business where revenue exceeds cost
I've invested in AI companies. Every pitch material I've seen projects forward to profitability.
Pursuing ancillary revenue can still be a moonshot. Look at SpaceX. Musk has specifically said his goal is Mars. But they still build ships and sell commercial services to fund continued development.
I've invested in AI companies too, but I'm not talking about that. I'm talking about foundation model companies (namely OpenAI, Anthropic, Amazon, Google, etc.).
I'm sure the pitches for Anthropic and OpenAI show paths to profit, because you'd have to, and I'm sure the internal docs at Google and Amazon show the same thing, but that doesn't mean it's not a moonshot.
You'd have to show that if you want to get funded at all.
> Pursuing ancillary revenue can still be a moonshot. Look at SpaceX. Musk has specifically said his goal is Mars
SpaceX is profitable. Mars isn't "the only way" SpaceX's investors "get a return on their capital."
> I'm talking about foundation model companies (namely OpenAI, Anthropic, Amazon, Google, etc.)
I am, too. (The private ones, at least.)
> You'd have to show that if you want to get funded at all
Doesn't this undermine the argument that "the people investing in AI companies" expect "the only way they get a return on their capital" is if their horse invents AGI?
> the industry is spending over $30 billion a month (approximately $400 billion for 2025) and only receiving a bit more than a billion a month back in revenue.
I suspect that this revenue number is a vast underestimation, even today, ignoring the reality of untapped revenue streams like ChatGPT's 800M advertising eyeballs.
1. Google has stated that Gemini is processing 1.3 quadrillion tokens per month. It's hard to convert this into raw revenue; it's spread across different models, and much of it is likely internal usage, or usage tied to a Workspace subscription rather than per-token API billing. But to give a sense of the scale, this is what that annualized revenue looks like priced at per-token API rates for their different models, assuming a 50/50 input/output split: Gemini 2.5 Flash Lite: ~$9B/year; Gemini 2.5 Flash: ~$22.8B/year; Gemini 2.5 Pro: ~$110B/year.
2. ChatGPT has 800M weekly active users. If 10% of these users are on the paid plan, this is $19.2B/year. Adjust this value depending on what percentage of users you believe pay for ChatGPT. Sam has announced that they're processing 6B API tokens per minute, which, again depending on the model, puts their annualized API revenue between $1B-$31B.
3. Anthropic has directly stated that their annualized revenue, as of August, was $5B [2]. Given their growth, and the success of Claude 4.5, it's likely this number is now more like $6B-$7B.

So, just with these three companies, which are the three biggest involved in infrastructure rollouts, we're likely somewhere in the realm of ~$30B/year? Very fuzzy and hard to pin down, but at the very least I think it's weird to guess that the number is closer to ~$12B. It's possible the article is basing its estimates on numbers from earlier in 2025, but to be frank: if you're not refreshing your knowledge on this stuff every week, you're out of date. It's moving so fast.
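The arithmetic behind points 1 and 2, as a sketch; the per-token prices and paid-subscriber share are assumptions (list prices change frequently):

    # Annualizing Gemini's token volume at assumed API list prices,
    # plus ChatGPT subscription revenue at an assumed paid share.
    tokens_per_month = 1.3e15            # Google's stated quadrillion-scale figure
    half = tokens_per_month * 12 / 2     # 50/50 input/output split, annualized
    in_price, out_price = 0.30, 2.50     # assumed $/1M tokens for Gemini 2.5 Flash
    flash_rev = (half * in_price + half * out_price) / 1e6
    print(f"Flash-priced annualized: ${flash_rev / 1e9:.1f}B")    # ~$22B

    wau, paid_share, monthly_price = 800e6, 0.10, 20
    subs_rev = wau * paid_share * monthly_price * 12
    print(f"ChatGPT subscriptions: ${subs_rev / 1e9:.1f}B/year")  # $19.2B

Swapping in Flash-Lite or Pro prices moves the first figure substantially in either direction, which is why the comment gives a range.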
> 2. ChatGPT has 800M weekly active users. If 10% of these users are on the paid plan, this is $19.2B/year. Adjust this value depending on what percentage of users you believe pay for ChatGPT. Sam has announced that they're processing 6B API tokens per minute, which, again depending on the model, puts their annualized API revenue between $1B-$31B.
OpenAI announced a few months ago that it had finally cracked $1B in monthly revenue (intriguingly, it did so twice, which makes me wonder how much fibbing there is in these statements).
I'll also say this: the fact that AI companies prefer to tout their usage numbers rather than their revenue numbers is a sign that their revenue numbers aren't stellar (especially given that several of the Big Tech companies have stopped reporting AI revenue as separate call-outs).
> OpenAI announced a few months ago that it had finally cracked $1B in monthly revenue (intriguingly, it did so twice, which makes me wonder how much fibbing there is in these statements).
I believe this is incorrect; as far as I've heard, an anonymous source leaked that OpenAI had hit $12B in annualized revenue a few months ago [1]. I do not personally put any weight in leaks, and prefer to operate on data that has been officially announced.
"Reported to shareholders" is a whole heck of a lot more accurate than some person on the internet playing numerology to turn similarly sketchily-sourced user numbers into revenue numbers.
That number was not reported as "reported to shareholders". Again, it's astounding how wrong you keep getting this. It was leaked to The Information by an anonymous source who claimed to be a shareholder who had received the financial disclosure.

It's all about citation. Everyone who read my numerology estimates above knew that they were estimates. You, on the other hand, lied by presenting two different leaks of OpenAI's revenue numbers as "official".
I think you're underestimating how quickly users can move to another platform if something better/cheaper shows up, unless there are user network effects that keep people on a platform. We've lived through several of these: Yahoo/Lycos to Google, a bunch of terrible providers to Gmail, various messengers to Apple/WhatsApp/Line dominating different countries, etc. This space seems ripe for the second-mover-advantage effect.
Yes, I moved from Yahoo to Google in multiple apps. And yet, once I subscribed to ChatGPT, it satisfies what I need, despite all the noise I hear about the alternatives.
> even today, ignoring the reality of untapped revenue streams like ChatGPT's 800M advertising eyeballs.
Respectfully, the idea of sticking ads in LLMs is just copium. It's never going to work.
LLMs' unfixable inclination for hallucinations makes this an infinite lawsuit machine. Either the regulators will tear OpenAI to shreds over it, or the advertisers seeing their trademarks hijacked by scammers will do it in their stead. LLMs just cannot be controlled enough for this idea to make sense, even with RAG.
And if we step away from the idea of putting ads in the LLM response, we're left with "stick a banner ad on chatgpt dot com". The exact same scheme as the Dotcom Bubble. Worked real well that time, I hear. "Stick a banner ad on it" was a shit idea in 2000. It's not going to bail out AI in 2025.
The original content that LLMs paraphrase is itself struggling to support itself on ads. The idea that you can steal all those impressions through a service that is orders and orders of magnitude more expensive and somehow turn a profit on those very same ads is ludicrous.
While it didn't work in 2000, "just stick ads on it" does work for Google and Meta, driving over $400B in combined annual advertising revenue. Their model, today, is far more relevant than calling back to antiquated banner advertising models from 25 years ago; you'll have to convince me that Google and Meta's model cannot work for OpenAI, which you have not adequately done.
I will point out that this is contentious; both of these companies are subject to regulatory investigations around their monopolistic practices, and there's the matter that they are pretty much the only companies for which this is profitable.
> Their model, today, is far more relevant than calling back to antiquated banner advertising models from 25 years ago
Hardly. It's fundamentally the same model: content with an advertisement next to it. Whether that is a literal banner ad or a disguised search result, none of the form factors are new.
For all the advances in ad-tech, CPMs are still the same old dogshit they were shortly after the dotcom bubble, looking better only because of inflation.
> you'll have to convince me that Google and Meta's model cannot work for OpenAI, which you have not adequately done.
That's the "orders and orders of magnitude more expensive" part. Neither Google Search nor Facebook is that profitable per single ad; they make it up in volume. LLMs are simply more expensive to operate than a search engine or a glorified web forum. Can OpenAI cut its opex and amortized capital costs down below the half-penny per impression they'd extract with good CPMs? Probably not.

But there's a deeper layer. The "fund AI with ads" model paints a scenario in which OpenAI would have to overtake Google: they need the ad-tech monopoly to push up CPMs, or that half-penny gets cut down by an order of magnitude.

This is unlikely. Making ChatGPT work as a search engine requires all the infrastructure of a search engine; ipso facto, it is always more expensive than a standalone search engine.

Yet at the same time, people only care about ChatGPT as search because Google Search is shit now. Were ChatGPT ever to become a serious threat to Google, Google could simply turn off the search-enshittifier for a bit, wipe out ChatGPT's market share, and push them into bankruptcy by drawing CPMs down below OpenAI's sustainability level.
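A minimal sketch of that per-query comparison; every number here is an assumption for illustration, since neither side of this exchange cites actual serving costs:

    # Ad revenue vs. serving cost per query, all inputs assumed.
    cpm = 5.0                             # assumed $ per 1000 ad impressions
    revenue_per_query = cpm / 1000        # one ad per response: the 'half-penny'

    llm_cost_per_query = 0.01             # assumed inference + amortized capex
    search_cost_per_query = 0.0005        # assumed cost of a classic search query
    print(f"LLM margin:    ${revenue_per_query - llm_cost_per_query:+.4f}")
    print(f"Search margin: ${revenue_per_query - search_cost_per_query:+.4f}")

Under these assumptions the ad-funded LLM query loses money where the search query comfortably makes it; the real disagreement is whether inference cost falls below the CPM line.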
> That's the "orders and orders of magnitude more expensive" part.

It's not orders of magnitude more expensive, and if we take the most recent report for the half year, they'd need a per-quarter ARPU of $8 from their free users to be profitable with billions to spare. That is low. This is not some herculean task. They don't need to 'overtake Google' or whatever. They literally don't need to change anything.
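What that ARPU claim implies at the user numbers cited upthread (a sketch; the free-user share is assumed from the 10%-paid estimate above):

    # Implied annual ad revenue from an $8/quarter free-user ARPU.
    weekly_active_users = 800e6
    free_share = 0.90                # if ~10% pay, per the estimate upthread
    arpu_per_quarter = 8.0

    ad_revenue = weekly_active_users * free_share * arpu_per_quarter * 4
    print(f"Implied ad revenue: ${ad_revenue / 1e9:.0f}B/year")   # ~$23B/year

For scale, that is well under a tenth of Google and Meta's combined ~$400B in annual ad revenue mentioned above.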
You can't average out the userbase like that because the individual usage of the service varies wildly, and advertising revenue is directly tied to amount of usage.
Especially because OpenAI highly inflates user figures.
> It's not orders of magnitudes more expensive
This too is skewed by averaging with users who barely use the service.
Google, with this business model, makes more profit than any other company; ergo, tautologically, it is the most magical business model ever discovered.