I do feel there might be a day of reckoning where Nvidia's bet-the-farm wager on this AI bubble ends up blowing up in their face.
I hope gamers, systems integrators, and regular PC enthusiasts don't have the memory of a goldfish and go back to business as usual. It needs to hurt Nvidia in the pocketbook.
Will this happen? Unlikely, but hope springs eternal.
Nvidia's share price will take a hit when consolidation starts in AI, because their business won't be growing as fast as their P/E ratio implies. The circular deals could also hurt them if one of the AI providers they've invested in goes bust [1][2]. They won't go out of business, but holders of their shares may lose a lot of money. But will this happen after Anthropic and OpenAI have their IPOs, possibly next year? Nvidia stands to make a lot on paper if those IPOs do well.
If OpenAI has their IPO, retail is likely going to get fleeced, given how pitiful the return on their investments to date has been. They are seeing revenues of around $13 billion for 2025, with claims of over $100 billion by 2030, but the investments they are making are orders of magnitude greater. Who is ultimately going to pay for this?
Surely OpenAI has customers buying their pro packages for ChatGPT, but that can't really be it. And businesses are starting to realize that AI can't replace the workforce that easily either.
Hardly taking this personally. Just calling out how I see it most likely going. Also... Nvidia has done quite a bit unethically: violating anti-monopoly laws (though with the current US administration, those laws may as well not be worth the paper they're printed on), screwing with product reviewers, pulling a 90s-era Microsoft to obliterate their competition at all costs, and screwing over their board partners, like EVGA. Gamers Nexus on YouTube has covered plenty of this.
That said, although AI has some uniquely good applications, this AI mania is feeding into some ridiculous corporate feedback loop that is having a negative impact on the consumer.
Having to pay several thousand dollars for a top-tier consumer GeForce, when less than a decade ago you could get the same tier for a few hundred dollars, tells me the customer is being taken for a ride. It stinks.
I don't get this. Nvidia didn't "bet the farm" on AI. They are simply allocating limited resources (in this case memory) to their most profitable products. Yes, it sucks for gamers, but I see Nvidia more reacting to the current marketplace than driving that change.
If/when the AI bubble bursts, Nvidia will just readjust their resource allocation accordingly.
I also don't understand common sentiment that if/when the AI bubble pops and hardware manufacturers come crawling back, we consumers are going to make manufacturers regret their decision.
Isn't the whole problem that all the manufacturers are pivoting away from consumers and toward AI? How are we going to "hurt Nvidia in the pocketbook?" Buy from their competitors? But they are also making these pivots/"turning their backs on us." Just abstain from buying hardware out of protest? As soon as prices go down there's gonna be a buying frenzy from everyone who's been waiting this whole time.
If/when the bubble pops, manufacturers will find that they can't butter their bread like they could when the datacenter craze was booming. In a world that runs on growth, companies aren't very good at shrinking.
It doesn't matter what consumers do or don't do -- we plebeians are a tiny portion of their present market. We can buy the same GPUs from the same folks as before, or we can do something different, and it won't matter.
Whatever we do will be a rounding error in the jagged, gaping, infected hole where the AI market once was.
This is an even-handed take. I still think consumers in general should vote with their wallets, even if all of them put together won't hold a candle to their datacenter customers. If nothing else, it can grant the competition more market share, and maybe AMD and Intel can invest more into Radeon and Arc, respectively. That can only be a good thing, since I'd love to see more broad support for FSR and XeSS technologies on games, and ROCm and oneAPI for compute.
I certainly have no delusions of Nvidia going bankrupt. In fact, they will certainly make it to the other side without much issue. That said, I do foresee Nvidia taking a reputational hit, with AMD and (possibly) Intel gaining more mindshare among consumers.
When you have a CEO like Elon who swears up and down that cameras alone are enough for autonomous vehicles, and who skimps on crucial sensors like lidar, can anyone be surprised by this result? Tesla also likes to take the motto of "move fast and break things" to a fault.
I'm not surprised, more because there was info on Reddit that Tesla FSDs were having disengagements every 200 miles or some such in urban environments. Camera only probably could work in the future but seemingly not yet.
It's easy to argue that lidar is expensive and unnecessary, but radar has been standard in luxury adaptive cruise control for decades, so it has a variety of OEM suppliers. Tesla's lack of radar, down to the CEO's ego, is therefore damnable. The problem with camera-only is fog. My human eyes don't penetrate fog. Radar does. Proving that camera-only is possible is stupid and ego-driven; it doesn't come from a place of merit, science, or technology.
Musk's success story is taking very bold bets almost flippantly. These bets carry a premium because, to most people, they are so toxic that they would never consider them.
Every time when he has the choice to do something conservative or bold, he goes for the latter, and so long as he has a bit of luck, that is very much a winning strategy. To most people, I guess the stress of always betting everything on red would be unbearable. I mean, the guy got a $300m cash payout in 1999! Hands up who would keep working 100 hour weeks for 26 years after that.
I'm not saying it is either bad or good. He clearly did well out of it for himself financially. But I guess the whole cameras/lidar thing is similar. Because it's big, bold, from the outset unlikely to work, and it's a massive "fake it till you make it" thing.
But if he can crack it, again I guess he hits the jackpot. Never mind cars; they are expensive enough that lidar cost is a rounding error. But if he can then stick 3D vision into any old cheap cameras, surely that is worth a lot. In fact, wasn't this part of Tesla's great vision - to diversify away from cars and into robots etc.? I'm sure the military would order millions of cheapo cameras that work 90% as well as a fancy lidar while being fully solid state, etc.
That he is using his clients as lab rats for it is yet another reason why I'm not buying one. But to me this is totally in character for Musk.
It rather reminds me of how Musk was obsessed with converting PayPal to run on Windows servers instead of Linux, and how he eventually got ousted by the other executives over it. Because he already had a big share in the company, he still made a lot of money. But he doesn't seem to be a clever engineer.
He's a complicated figure. He has done so much good as well. EVs in the US and reusable rockets owe a lot to him. OTOH, so does the cesspool that is X.
Bill Gates is still kickin'. There are credible independent estimates that his funding has saved tens of millions of lives that would've been lost to malaria, AIDS, and other diseases.
Effective altruism and other New Age garbage pseudo philosophy can't hold a candle to that.
In my opinion, one of the things that most reveals a person's biases and worldview is which tech oligarchs they revere and which they loathe.
To reveal my own bias / worldview, I loathe and detest Bill Gates in nearly every way and have done so for over three decades. I think he has had a massively negative impact on humanity, mainly by making the computer industry so shitty for 4+ decades but in other more controversial areas as well.
With Elon Musk, while perceiving a number of big faults in the man, I also acknowledge that he has helped advance some very beneficial technologies (like electric vehicles and battery storage). So I have a mixed opinion on him, while with Gates, he is almost all evil and has had a massive negative impact on the planet.
I'm conflicted on this one. Famously, Tesla's main revenue source for ages was selling green credits to other car makers. Presumably, if not for Tesla, these car makers would have had to do something else.
The way I see it, he converted his cars' greenness into other people's fumes. So not a net win after all.
I just find it distracting to pretend we know exactly what albatross to hang around the neck of the problem here. While I do tend to think lidar is probably useful, I also don't think this is an open-and-shut case where lidar is absolutely essential and makes all the difference. Making assertions like that assumes more certainty than I think can be granted, and it distracts from the stronger point: that Tesla doesn't seem to have serious proof that their system is getting better, or that it is more trustworthy.
The data just isn't there for us outsiders to make any kind of case, and that's the crucial baseline we're missing.
Completely self driving? Don't they go into a panic mode, stop the vehicle, then call back to a central location where a human driver can take remote control of the vehicle?
They've been seen doing this at crime scenes and in the middle of police traffic stops. That speaks volumes too.
Incorrect. Humans never take over the controls. An operator is presented with a set of options and chooses one, which the car then performs. The human is never in direct control of the vehicle. If this process fails, they send a physical human to drive the car.
> presented with a set of options and they choose one
> they send a physical human to drive the car.
Those all sound like "controls" to me.
"Fleet response can influence the Waymo Driver's path, whether indirectly through indicating lane closures, explicitly requesting the AV use a particular lane, or, in the most complex scenarios, explicitly proposing a path for the vehicle to consider. "
So they built new controls that typical vehicles don't have. Then they use them. I fail to see how any of this is "incorrect." It is, in fact, _built in_ to the system from the ground up.
Semantic games aside, it is obviously more incorrect to call them "completely self driving" especially when they "ask for help." Do human drivers do this while driving?
I don't know what you're trying to prove here. Stopping safely and waiting for human input in edge cases is fine (Waymo). Crashing into things is not fine (Tesla).
And there is a linked article about Waymo's data reporting, which is much more granular and detailed, whereas Tesla's is lumpy and redacted. Anyway, Waymo's data, covering more than 100M miles of driverless operation, shows a 91% reduction in accidents vs. humans. Tesla's is 10x the human accident rate according to the Austin data.
The more I've looked into the topic, the less I think the removal of lidar was a cost issue. There are a lot of benefits to simplifying your sensor stack, and while I won't pretend to know the best solution, removing things like lidar and ultrasonic sensors seems to have been a decision about improving performance. By doubling down on cameras, your technical team can stay focused on one sensor technology, and you don't have to deal with data priority and trust across sensors the way you do when you have a variety of them.
The only real test will be who creates the best product, and while Waymo seems to have the lead, it's arguably too soon to tell.
Having multiple sources of data is a benefit, not a problem. Entire signal processing and engineering domains exist to take advantage of this. Even the humble Kalman filter lets you combine multiple noisy sources to get a more accurate result than would be possible using any one source alone.
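To make that concrete, here's a minimal sketch (purely illustrative, not anyone's actual stack) of the static special case of the Kalman update: a precision-weighted average of two independent noisy measurements of the same quantity. The sensor names and numbers are made up.

    # Fuse two independent noisy estimates of the same quantity
    # (say, range to an obstacle from a camera and a radar).
    # Each measurement is weighted by its precision (inverse variance).
    def fuse(z1, var1, z2, var2):
        w1, w2 = 1.0 / var1, 1.0 / var2
        estimate = (w1 * z1 + w2 * z2) / (w1 + w2)
        variance = 1.0 / (w1 + w2)  # always <= min(var1, var2)
        return estimate, variance

    # Hypothetical numbers: camera says 42.0 m (variance 4.0),
    # radar says 40.5 m (variance 1.0).
    est, var = fuse(42.0, 4.0, 40.5, 1.0)
    print(est, var)  # ~40.8 m with variance 0.8, tighter than either sensor alone

The fused variance is always smaller than either input's, which is the formal version of "more data is a benefit"; the catch the sibling comment raises is that the filtering and time-alignment machinery around this adds latency and engineering overhead.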
Kalman filters and more advanced aggregators add non-trivial latency. So even if one does not care about cost, there can be a drawback from having an extra sensor.
Cars and roads are built for human reaction times. That's why you have low speed limits in urban areas. You can have a pile of latencies attributable to processing a scene and still have superhuman reaction time that contributes to outperforming humans.
It's analogous to communications latency. High latencies are very annoying to humans, but below a threshold they stop mattering.
What I've heard out of Elon and engineers on the team is that some of these variations of sensors create ambiguity, especially around faults. So if you have a camera and a radar sensor and they're providing conflicting information, it's much harder to tell which is correct compared to just having redundant camera sensors.
I will also add, from my personal experience: while some sensors fuse well together (like IMU/GNSS), we usually used either lidar or camera, not both. Part of the reason was that combining them started requiring a lot more overhead and cross-sensor experts, and it took away from the actual problems we were trying to solve. While I suppose one could argue this is a cost issue (just hire more engineers!), I do think there's value in simplifying your tech stack whenever possible. The fewer independent parts you have, the faster you can move and the more people can become experts on one thing.
Again Waymo's lead suggests this logic might be wrong but I think there is a solid engineering defense for moving towards just computer vision. Cameras are by far the best sensor, and there are tangible benefits other than just cost.
From my previous comment, in case you didn't see it
> Again Waymo's lead suggests this logic might be wrong but I think there is a solid engineering defense for moving towards just computer vision. Cameras are by far the best sensor, and there are tangible benefits other than just cost.
Fanboy or not, we don't know how much Waymo's model relies on an army of contractors labeling every stop light, sign, and trash can; sure, they're detecting those with lidar rather than cameras, but the labeling effort is opaque. We also don't know much about Tesla's Robotaxi initiative and how much human help they're relying on either.
In our case if we're spending a lot of time on something that doesn't improve the product, it just takes away from the product. Like if we put 800 engineering hours into sensor fusion and lidar when the end product doesn't become materially better, we could have placed those 800 hours towards something else which makes the end product better.
It's not that we ran into problems, it's that the tech didn't deliver what we hoped when we could have used the time to build something better.
To tell what? Waymo is easily 5 years ahead on the tech alone, let alone the rollout of the autonomous service. Tesla may eventually catch up, but they are definitely behind.
Sure, but Tesla is already losing the race. They were ahead a few years ago, but not anymore. They bet on getting autonomous driving done with cameras only, which are cheap and have a simple, well-understood tech stack and ecosystem.
It didn't work out though and now multi sensor systems are eating their lunch.
Honestly I think it's more that he was backed into a corner. The Teslas from ~9 years ago when they first started selling "full self driving" as an option, had some OK cameras and, by modern standards, a very crappy radar.
The radar they had really couldn't detect stationary objects. It relied on the doppler effect to look for moving objects. That would work most of the time, but sometimes there would be a stationary object in the road, and then the computer vision system would have to make a decision, and unfortunately in unusual situations like a firetruck parked at an angle to block off a crash site, the Tesla would plow into the firetruck.
Given that the radar couldn't really ever be reliable enough to create a self driving vehicle, after he hired Karpathy, Elon became convinced that the only way to meet the promise was to just ignore the radar and get the computer vision up to enough reliability to do FSD. By Tesla's own admission now, the hardware on those 2016+ vehicles is not adequate to do the job.
All of that is to say that IMO Elon's primary reason for his opinions about Lidar are simply because those older cars didn't have one, and he had promised to deliver FSD on that hardware, and therefore it couldn't be necessary, or he'd go broke paying out lawsuits. We will see what happens with the lawsuits.
Seems to me rather that Teslas are self driving cars with a handicap; they are missing some easily obtainable data because they lack the sensors. Because their CEO is so hard headed.
Simplifying things doesn't always make things easier.
Usually you would go in with the maximum number of sensors and the most data, make it work, and then see what can be left out. It seems dumb to limit yourself from the beginning if you don't know yet what really works. But then I am not a multi-billionaire, so what do I know?
Well, we know that vision works, based on human experience. So a few years ago it was a reasonable bet that cameras alone could solve this. The problem with Tesla is that they still insist on it after it became apparent that vision alone, with current tech and machine learning, does not work. They don't even want to bring radar back, even though radar doesn't cost much and is very beneficial for safety.
Human vision is terrible in conditions like fog, rain, snow, darkness and many others. Other sensor types would do much better there. They should have known that a long time ago.
> Well we know that vision works based on human experience.
Actually, we know that vision alone doesn't work.
Sun glare. Fog. Whiteouts. Intense downpours. All of them cause humans to get into accidents, and electronic cameras aren't even as good as human eyes due to dynamic range limitations.
Dead reckoning with GPS and maps are a huge advantage that autonomous cars have over humans. No matter what the conditions are, autonomous cars know where the car is and where the road is. No sliding off the road because you missed a turn.
Being able to control and sense the electric motors at each wheel is a big advantage over "driving feel" from the steering wheel and your inbuilt sense of acceleration.
Radar/lidar is just all upside above and beyond what humans can do.
This is a solved problem. Many people I know, including myself, use Waymos on a weekly basis. They are rock solid. Waymo has pretty unequivocally solved the problem. There is no wait and see.
I mean, if Waymo had unequivocally solved the problem, the country would be covered in them, and the only limit to their expansion would be how many cars they can produce. Currently they're limited by how quickly they can train on new areas, which is likely slowed by the fact that they're using over 20 sensors across four different types. On the other hand, Tesla could spread across the country tomorrow if they were reliable enough. I would think solving autonomous driving would imply you could go nationwide with your product.
>The only real test will be who creates the best product, and while Waymo seems to have the lead, it's arguably too soon to tell.
Price is a factor. I've been using the free self-driving promo month on my Model Y (hardware version 4), and it's pretty nice 99% of the time.
I wouldn’t pay for it, but I can see a person with more limited faculties, perhaps due to age, finding it worthwhile. And it is available now in a $40k vehicle.
It’s not full self driving, and Waymo is obviously technologically better, but I don’t think anyone is beating Tesla’s utility to price ratio right now.
I do see 2.5 GbE NICs making huge headway within just the past few years. Even lower-end AMD and Intel motherboards include 2.5 GbE ports by default these days, and the standard has the advantage of using the same copper RJ45 cabling while giving you a theoretical 2.5x gain in speed. For many users, myself included, 2.5 GbE is a fantastic leap forward without having to dump money into fancier gear to properly take advantage of something faster like 10 GbE networking.
Yes and no. HDMI CEC works pretty decently these days; all the kinks have been worked out over the years, and the only time it bugs out is if you use Chinese brands (looking at you, TCL) that write horrid firmware and never fix any bugs found after release.
DisplayPort has DDC/CI, which allows you to adjust things like brightness, volume, etc. from the host machine. This has existed since the DVI era (!), which means DisplayPort had a huge head start. But they never formalized and enforced the DDC/CI spec, which means every monitor has extremely weird quirks. Some will allow you to send and read data. Some will only allow you to send data and crash when you try to read. Some will update only once every few seconds.
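For anyone curious what poking DDC/CI actually looks like, here's a rough sketch, assuming a Linux box with the ddcutil CLI installed and i2c permissions set up; VCP feature 0x10 is brightness in the MCCS spec. Display numbering and output format vary by monitor, which is exactly the quirkiness described above.

    # Rough sketch: read and set a monitor's brightness over DDC/CI by
    # shelling out to ddcutil (assumes ddcutil is installed and working).
    import subprocess

    def get_brightness(display: int = 1) -> str:
        out = subprocess.run(
            ["ddcutil", "--display", str(display), "getvcp", "10"],  # 0x10 = brightness
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()

    def set_brightness(percent: int, display: int = 1) -> None:
        subprocess.run(
            ["ddcutil", "--display", str(display), "setvcp", "10", str(percent)],
            check=True,
        )

    print(get_brightness())  # e.g. "VCP code 0x10 (Brightness): current value = 70, max value = 100"
    set_brightness(40)

On monitors that crash on reads you'd stick to setvcp only, and some need retries or delays between commands, which matches the per-monitor quirks mentioned above.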
Although in this specific case, one wonders why Valve didn't just use two DisplayPort 1.4 ports and stick an onboard HDMI converter in front of one of them, sourced from a company that would be amenable to having Valve work on the firmware of said converter. Make the entire firmware of the converter open source except for the binary blob that handles the DisplayPort 1.4 -> HDMI 2.1 bits.
Hopefully Valve does this but also sells it as an external, high-quality converter. It would be a nice little plus even for non-Steam Machine owners, the same way Apple's USB-C to 3.5mm converter is the highest quality mini DAC on the market for the low price of €10.
Funny enough... HDMI CEC is still not perfect in my experience. For the longest time, if I powered on my Mac mini without powering on the TV manually, it would actually cause the TV to crash and force a reboot. It was really strange behavior.
> HDMI CEC works pretty decent these days, all the kinks have been worked out over the years and the only time it bugs out is if you use Chinese brands
I don't know. I have an LG TV and it does not support turning the display on/off with HDMI CEC. Everything else seems to work but it intentionally ignores those commands.
Brightness control on external monitors has never been supported in Windows though, partially due to issues with displays that have poor write endurance on internal storage.
It might not be an "internal Windows" tool, but I have controlled an ancient monitor (I think over VGA?) using a 3rd-party app on Windows. The buttons had broken, but software control worked just fine.
In fact I’ve used a 100 foot fiber optic DisplayPort cable that I “just bought” on Amazon, admittedly for a LOT of money (like, I think it was about $100 USD, 3 years ago or so).
I just wish they sold the transceivers separately from the fiber. Being able to use any random length of cheap off-the-shelf SMF/MMF fiber would be so much more convenient than having to get a custom one-off cable.
They exist for medium-speed HDMI (see for example [0]), but I haven't seen them for modern high-speed DP yet.
Huh, I thought I had mine earlier. Mine was from May 2021. They were very very new and had very few reviews, and it was $56. For a 100' fiber optic cable that promised 8k60 and was light.
This cable is absurdly long. I have no idea how to coil it nicely. At my last place I had three stories, and would sometimes just dangle most of it down to the ground then wind it up from the roof.
I hate noise from the PC, so I've sited my PC under the desk at the opposite end of the room to where I sit (so about 3.5m away). I have a pair of 5m DP cables running to my 2 ultrawide monitors without any problems at all, so it seems if you buy decent cables it just works with DP too.
The only potential issue is that they seem slow to wake from sleep. I've never been interested enough to investigate whether moving the PC closer with shorter cables fixes that, or whether it's just an issue with having 2 monitors. I think the underlying cause is actually just Windows: one monitor (they're supposed to be identical) seems to wake up earlier than the other, so it briefly flashes on, then goes black while it reconfigures for 2 screens, and then comes on again.
But anyway, my 5m cable runs seem fine. They weren't especially expensive nor especially cheap cables, IIRC around 10-15 GBP each.
Indeed. I'm pretty sure the issue is that the HDMI Consortium wants some kind of royalty for each device sold with a proper HDMI designation, whereas VESA doesn't care if you sell one device or a million devices with DisplayPort. You owe them nothing extra beyond the initial legal access fee.
Oh yeah, and the burdensome NDA that the HDMI Consortium requires its partners to agree to is another serious problem for the Linux driver.
By older Arc, I presume you're referring to Alchemist and not Battlemage in this case.
One of my brothers has a PC I built for him, specced out with an Intel Core i5 13400f CPU and an Intel Arc A770 GPU, and it still works great for his needs in 2025.
Surely, Battlemage is more efficient and more compatible in some ways than Alchemist. But if you keep your expectations in check, it will do just fine in many scenarios. Just avoid any games using Unreal Engine 5.
Yeah, I had an A770; it should be ~$200-$250 now on eBay, lightly used. It's, in my opinion, worth about $200 if it's relatively unused. As I mentioned, it's roughly on par with an RTX 3060, at least for compute loads, and the 16GB is nice to have for that. But for a computer from the 4th gen I'd probably only get an A380 or A580; the A380 is $60-$120 on eBay.
Bill also ended the strip exactly ten years after he started. Frankly, I appreciated his honesty and integrity around the strip. The comic stuck around just long enough to leave a lasting impression but never too long to overstay its welcome.
Other than Ultima 7, I only know of Zone 66 as another game that used Unreal mode. Early versions also didn't like v86 mode, but later versions added support for DPMI or in some other way started to play nicely with EMM386.
In your specific case too, I believe there are options for soft emulation of the FPU if you needed support for one in a pinch. I can’t say how the performance is, but I’d imagine it would be insanely slow.