What doesn’t make sense to me is that the cameras are nowhere near as good as human eyes. The dynamic range sucks, the car can’t put down a visor or wear sunglasses to deal with blinding light, the resolution is much worse, etc. Why not invest in the cameras themselves if this is your claim?
I always see this argument but from experience I don't buy it. FSD and its cameras work fine driving with the sun directly in front of the car. When driving manually I need the visor so far down I can only see the bottom of the car in front of me.
The cameras on Teslas only really lose visibility when dirty. Especially in winter when there's salt everywhere. Only the very latest models (2025+?) have decent self-cleaning for the cameras that get very dirty.
For which car? The older the car (hardware) version the worse it is. I've never had any front camera blinding issues with a 2022 car (HW3).
The thing to remember about cameras is that what you see in an image/display is not what the camera sees. Processing the image for display reduces the dynamic range, but FSD could work off the raw sensor data.
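As a toy illustration of that point (made-up numbers, not Tesla's actual imaging pipeline), naively tone-mapping 12-bit raw pixel values down to an 8-bit display collapses highlight detail that the raw data still distinguishes:

    # Toy example: mapping 12-bit raw sensor values to an 8-bit display
    # collapses highlight detail that the raw data still contains.
    import numpy as np

    raw = np.array([100, 2000, 4000, 4090, 4094])   # 12-bit raw values (0-4095)
    display = (raw / 4095 * 255).astype(np.uint8)   # naive 8-bit tone-map
    print(display)  # [  6 124 249 254 254] -> the two brightest pixels collapse

    # A model consuming the raw values can still tell 4090 and 4094 apart,
    # even though they are identical on the display.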
It doesn't run well on HW3 at all. HW4 has significantly better FSD when running comparable versions (v14). The software has little to do with the front camera getting blinded though.
Cerebras has effectively 100% yield on these chips. They have an internal structure made by just repeating the same small modular units over and over again. This means they can just fuse off the broken bits without affecting overall function. It's not like it is with a CPU.
I suggest reading their website; they explain pretty well how they manage good yield. Though I’m not an expert in this field, it does make sense, and I would be surprised if they were caught lying.
Defects are best measured on a per-wafer basis, not per-chip. So if your chips are huge and you can only put 4 chips on a wafer, 1 defect can cut your yield by 25%. If they're smaller and you fit 100 chips on a wafer, then 1 defect on the wafer only cuts yield by 1%. Of course, there's more to this when you start reading about "binning", fusing off cores, etc.
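As a rough back-of-the-envelope (assuming, for simplicity, that defects land uniformly at random on the wafer and any defect kills the die it hits):

    # Back-of-the-envelope yield model: defects land at random on the wafer,
    # and (simplifying) any defect kills the whole die it lands on.
    import random

    def simulated_yield(dies_per_wafer: int, defects: int, trials: int = 10_000) -> float:
        good = 0
        for _ in range(trials):
            dead = {random.randrange(dies_per_wafer) for _ in range(defects)}
            good += dies_per_wafer - len(dead)
        return good / (trials * dies_per_wafer)

    print(f"{simulated_yield(4, 1):.0%}")    # ~75%: one defect costs a quarter of the wafer
    print(f"{simulated_yield(100, 1):.0%}")  # ~99%: the same defect costs 1 die in 100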
There's plenty of information out there about how CPU manufacturing works, why defects happen, and how they're handled. Suffice to say, the comment makes perfect sense.
That's why you typically fuse off defective sub-units and just have a slightly slower chip. GPU and CPU manufacturers have done this for at least 15 years now, as far as I'm aware.
Sure it does. If it’s many small dies on a wafer, then imperfections don’t ruin the entire batch; you just bin those components. If the entire wafer is a single die, you have much less tolerance for errors.
They can be made from large wafers. A defect typically breaks whatever chip it's on, so one defect on a large wafer filled with many small chips will still just break one chip of the many on the wafer. If your chips are bigger, one defect still takes out a chip, but now you've lost more of the wafer area because the chip is bigger. So you get a super-linear scaling of loss from defects as the chips get bigger.
With careful design, you can tolerate some defects. A multi-core CPU might have the ability to disable a core that's affected by a defect, and then it can be sold as a different SKU with a lower core count. Cerebras uses an extreme version of this: the wafer is divided up into about a million cores, with a routing system that can bypass defective ones.
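A toy version of that idea (illustrative numbers, not Cerebras's actual fabric or defect rates): with a million small cores, fusing off a handful of defective ones barely dents usable capacity.

    # Toy "fuse off the broken cores" model.
    import random

    CORES = 1_000_000
    defective = {random.randrange(CORES) for _ in range(50)}  # hypothetical 50 defects

    usable = CORES - len(defective)
    print(f"usable cores: {usable} ({usable / CORES:.4%})")  # ~99.995% of capacity survives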
There’s an expected number of defects per wafer. If a chip has a defect, it is lost (a simplification). A wafer with 100 chips may lose 10 to defects, giving a yield of 90%. The same wafer with 1000 smaller chips would still lose only 10 of them, giving 99% yield.
As another comment in this thread states, Cerebras seems to have solved this by making their big chip out of a lot of much smaller cores that can be disabled if they have defects.
Indeed, the original comment you replied to actually made no sense in this case. But there seemed to be some confusion in the thread, so I tried to clear that up. I hope I’ll get to talk with one of the Cerebras engineers one day; that chip is really one of a kind.
It forces you to read after every write. E.g., you edit line 15 into two lines: now you either need arithmetic to shift every later line number, or you have to re-read the full file to reindex by line number.
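A minimal sketch of the bookkeeping that forces on you (hypothetical helper, not any particular tool's API):

    # After replacing one line with `new_count` lines, every planned edit
    # *after* that line must shift; earlier ones are untouched.
    def reindex(line_numbers: list[int], edited_line: int, new_count: int) -> list[int]:
        delta = new_count - 1  # one line became `new_count` lines
        return [n + delta if n > edited_line else n for n in line_numbers]

    # Planned edits at lines 10, 15 and 40; then line 15 is split into two lines:
    print(reindex([10, 15, 40], edited_line=15, new_count=2))  # [10, 15, 41]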
We dug into those sorts of questions with hypertokens: a robust hash for lines, code, table rows, or any in-context token tagging, to give models photographic memory.
One mechanism we establish is that each model has a fidelity window, i.e., r tokens of content per s tag tokens. Each tag token adds extra GUID-like marker capacity via its embedding vector; since 1-, 2-, and 3-digit numbers are only one token in top models, a single hash token lacks enough capacity and separation in latent space.
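The digit-token claim is easy to check with OpenAI's open-source tiktoken library (assuming the cl100k_base vocabulary; other models' tokenizers may chunk digits differently):

    # Quick check: 1-3 digit numbers are single tokens in cl100k_base.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for s in ["7", "42", "123", "1234"]:
        print(f"{s!r} -> {len(enc.encode(s))} token(s)")
    # '7' -> 1, '42' -> 1, '123' -> 1, '1234' -> 2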
We also show the hash should be properly prefix-free, with unique symbols per digit position: e.g., if hashing with A-K for the first position and L-Z for the second, then "AR" is a legal hash whereas "MC" is not.
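A minimal sketch of that positional-alphabet rule (my reading of the scheme, not their exact implementation):

    # Position 0 draws from A-K and position 1 from L-Z; because the
    # alphabets are disjoint, every tag is self-delimiting (prefix-free).
    ALPHABETS = ["ABCDEFGHIJK", "LMNOPQRSTUVWXYZ"]

    def is_valid_tag(tag: str) -> bool:
        return len(tag) == len(ALPHABETS) and all(
            ch in alphabet for ch, alphabet in zip(tag, ALPHABETS)
        )

    print(is_valid_tag("AR"))  # True: A is a first-position symbol, R a second
    print(is_valid_tag("MC"))  # False: M is not in the first position's alphabet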
We can do all this and more rather precisely, as we show in our arXiv paper; the next update goes deeper into group theory, information theory, etc. on boosting model recall, reasoning, and tool calls by way of robust hashing.
The author writes that these hashes are 2 or 3 characters long, I assume depending on the line count. That's good for almost 48k lines; if your files are longer than that, you have other issues.
I don't doubt it, but what were they all doing? The Metaverse had 10k employees on it for multiple years and seemed to be at a near standstill for long periods of time. What do these massive teams do all day?
Have meetings to figure out how to interact with the other 9990 employees. Then try to build on the skeleton app left behind by a team of transient engineers who moved on to their next gig after 18 months, before throwing it out and starting again from scratch.
Exactly. What Meta accomplished could have been done by a team of less than 40 mediocre engineers. It’s really just not even worth analyzing the failure. I am in complete awe when I think about how bad the execution of this whole thing was. It doesn’t even feel real.
Actually, I would like to see a post-mortem that showed where all the money actually went; they somehow spent ~85x what RSI has raised for Star Citizen, and what they had to show for it was worse than some student projects I've seen.
Were they just piling up cash in the parking lot to set it on fire?
At least part of the funding went to research on hard science related to VR, such as tracking, lenses, CV, 3D mapping, etc. And it paid off: IMO Meta has the best hardware and software foundation for delivering VR, and projects like Hyperscape (off-the-shelf, high-fidelity 3D mapping) are stunning.
Whether it was worth it is another question, but I would not be surprised if it's recycled to power a futuristic AI interface or something similar at some point.
Big company syndrome has existed for a long time. It’s almost impossible to innovate or move fast with 8 levels of management and bloated codebases. That’s why startups exist.
"It was in the reign of George III that the aforesaid personages lived and quarrelled; good or bad, handsome or ugly, rich or poor, they are all equal now."
I’ve had software issues on an ID4 and an iX, but I’ve never had reliability problems; the cars have always just worked with no maintenance. Same with my Model Y, minus even the software issues!
Maybe it’s a “but when it happens you’re screwed” situation. I’m thinking of the story of BMW’s battery safety fuse (the one that trips in an accident to protect first responders and the people in the car) actually tripping when you hit a curb or a pothole a bit too hard. It requires a very expensive trip to the dealer. Some of my Tesla-owning friends have already been in the shop 2-3 times getting something about the suspension fixed.
I have no idea if Chinese EVs are consistently better; Volvo can be seen as one, and I don’t think they’ve excelled at reliability lately.
P.S. Software issues are reliability issues. The software is a core part of the car and its value proposition, you can’t discount them as “just software issues, not reliability”.
> Some of my Tesla-owning friends have already been in the shop 2-3 times getting something about the suspension fixed.
They're pretty lucky from what I hear! A friend of mine just sold his Model S because he'd been waiting over 7 months for the shop to source a replacement part. Apparently he'd even resorted to begging Musk over X to look into it, because Tesla won't even give him an ETA.
The ID4 feels like they took every lesson of predictable UX design and then intentionally reversed it to make the most frustrating UI possible.
The window controls, touch buttons, screen, steering wheel controls, etc. They all seem designed to answer the question, "how could we make this unnecessarily difficult and distracting to use? How could we possibly cram in yet another State Machine for the user to keep (lose) track of?"
It also has the "try to kill the asthmatic by randomly switching off recirculate while driving through dense wood smoke" feature, naturally.
Considering how much money VW makes on EVs[0], I suppose I'm not surprised by this 'nudge' toward gas cars.
So it doesn’t matter at all, except to your sensibilities. Sounds to me like they’re simply much better at prioritisation than your average HN user, who’d have taken forever to release it, but at least the terminal interface would be snappy…
Aside from startup time, Claude Code is a tremendous tool, by far the most useful I’ve encountered yet. This seems very nitpicky compared to the total value provided. I think y'all are missing the forest for the trees.
Most of the value of Claude Code comes from the model, and that's not running on your device.
The Claude Code TUI itself is a front end and should not be taking 3-4 seconds to load. That kind of loading time is around what VSCode takes on my machine, and VSCode is a full-blown editor.
The humans in the company (correctly) realised that a few seconds to open basically the most powerful productivity agent ever made, so they can focus on fast iteration of features, is a totally acceptable trade-off priority-wise. Who would think differently???