
What doesn’t make sense to me is that the cameras are nowhere near as good as human eyes. The dynamic range sucks, it doesn’t put down a visor or wear sunglasses to deal with light beaming straight into it, resolution is much worse, etc. Why not invest in the cameras themselves if this is your claim?

I always see this argument but from experience I don't buy it. FSD and its cameras work fine driving with the sun directly in front of the car. When driving manually I need the visor so far down I can only see the bottom of the car in front of me.

The cameras on Teslas only really lose visibility when dirty. Especially in winter when there's salt everywhere. Only the very latest models (2025+?) have decent self-cleaning for the cameras that get very dirty.


FSD doesn't "work fine" driving directly into the sun. There are loads of YT videos that demonstrate this.

For which car? The older the car (hardware) version the worse it is. I've never had any front camera blinding issues with a 2022 car (HW3).

The thing to remember about cameras is that what you see in an image/display is not what the camera sees. Processing the image reduces the dynamic range, but FSD could work off the raw sensor data.


Nobody cares that you think v14.7.22b runs well on HW3.1. Literally nobody.

It doesn't run well on HW3 at all. HW4 has significantly better FSD when running comparable versions (v14). The software has little to do with the front camera getting blinded though.

Especially the part where the cameras do not meet minimum vision requirements [1] in many states where it operates such as California and Texas.

[1] https://news.ycombinator.com/item?id=43605034


The bigger the chip, the worse the yield.

Cerebras has effectively 100% yield on these chips. They have an internal structure made by just repeating the same small modular units over and over again. This means they can just fuse off the broken bits without affecting overall function. It's not like it is with a CPU.

I suggest reading their website; they explain pretty well how they manage good yield. I’m not an expert in this field, but it does make sense, and I would be surprised if they were caught lying.

This comment doesn't make sense.

One wafer will turn into multiple chips.

Defects are best measured on a per-wafer basis, not per-chip. So if your chips are huge and you can only fit 4 chips on a wafer, 1 defect can cut your yield by 25%. If they're smaller and you fit 100 chips on a wafer, then 1 defect on the wafer only cuts yield by 1%. Of course, there's more to this when you start reading about "binning", fusing off cores, etc.
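To make that concrete, here's a rough back-of-the-envelope sketch using the standard first-order Poisson yield model (per-die yield ~ exp(-defect_density * die_area)); the defect density and die areas below are made-up illustrative numbers, not any fab's real figures:

    import math

    def die_yield(defect_density_per_cm2, die_area_cm2):
        # First-order Poisson model: probability that a die has zero defects.
        return math.exp(-defect_density_per_cm2 * die_area_cm2)

    D = 0.1  # defects per cm^2 (made up, for illustration only)

    for name, area_cm2 in [("small die, 1 cm^2", 1.0),
                           ("big GPU-class die, 8 cm^2", 8.0),
                           ("wafer-scale die, ~460 cm^2", 460.0)]:
        print(f"{name}: per-die yield ~ {die_yield(D, area_cm2):.1%}")

    # small die, 1 cm^2:          per-die yield ~ 90.5%
    # big GPU-class die, 8 cm^2:  per-die yield ~ 44.9%
    # wafer-scale die, ~460 cm^2: per-die yield ~ 0.0%
    # Loss grows super-linearly with die area, which is why a monolithic
    # wafer-scale part only works if it can tolerate defects internally.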

There's plenty of information out there about how CPU manufacturing works, why defects happen, and how they're handled. Suffice to say, the comment makes perfect sense.


That's why you typically fuse off defective sub-units and just have a slightly slower chip. GPU and CPU manufacturers have done this for at least 15 years now, that I'm aware of.

Sure it does. If it’s many small dies on a wafer, then imperfections don’t ruin the entire batch; you just bin those components. If the entire wafer is a single die, you have much less tolerance for errors.

Although, IIUC, Cerebras expects some amount of imperfection and can adjust the hardware (or maybe the software) to avoid those components after they're detected. https://www.cerebras.ai/blog/100x-defect-tolerance-how-cereb...

You can just do dynamic binning.

Bigger chip = more surface area = higher chance for somewhere in the chip to have a manufacturing defect

Yields on silicon are great, but not perfect


Does that mean smaller chips are made from smaller wafers?

They can be made from large wafers. A defect typically breaks whatever chip it's on, so one defect on a large wafer filled with many small chips will still just break one chip of the many on the wafer. If your chips are bigger, one defect still takes out a chip, but now you've lost more of the wafer area because the chip is bigger. So you get a super-linear scaling of loss from defects as the chips get bigger.

With careful design, you can tolerate some defects. A multi-core CPU might have the ability to disable a core that's affected by a defect, and then it can be sold as a different SKU with a lower core count. Cerebras uses an extreme version of this, where the wafer is divided up into about a million cores, and a routing system that can bypass defective cores.
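A toy sketch of that binning idea (not how any real fab or Cerebras actually does it, just the shape of the logic): sprinkle random defects over a grid of cores, fuse off or route around the bad ones, and bin the part by how many survive. The core counts and defect rates are illustrative guesses.

    import random

    def bin_chip(rows, cols, defect_rate, spare_budget):
        # Each core is independently good with probability (1 - defect_rate).
        cores = [[random.random() > defect_rate for _ in range(cols)]
                 for _ in range(rows)]
        bad = sum(row.count(False) for row in cores)
        total = rows * cols
        if bad == 0:
            return f"full SKU ({total} cores)"
        if bad <= spare_budget:
            # Fuse off (or route around) the bad cores and sell the rest.
            return f"reduced SKU ({total - bad} cores, {bad} disabled)"
        return "scrapped"

    random.seed(0)
    # An 8-core CPU that can spare 2 cores vs. a wafer-scale grid of roughly
    # 900k tiny cores that can route around a few thousand bad ones.
    print(bin_chip(2, 4, defect_rate=0.02, spare_budget=2))
    print(bin_chip(950, 950, defect_rate=0.001, spare_budget=5000))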

They have a nice article about it here: https://www.cerebras.ai/blog/100x-defect-tolerance-how-cereb...


Nope. They use the same size wafers and then just put more chips on a wafer.

So, does a wafer with a huge chip have more defects per area than a wafer with hundreds of small chips?

There’s an expected number of defects per wafer. If a chip has a defect, then it is lost (a simplification). A wafer with 100 chips may lose 10 to defects, giving a yield of 90%. The same wafer with 1000 smaller chips would still lose only 10 of them, giving 99% yield.

As another comment in this thread points out, Cerebras seems to have solved this by building their big chip out of many much smaller cores that can be disabled if they have defects.

Indeed, the original comment you replied to actually made no sense in this case. But there seemed to be some confusion in the thread, so I tried to clear that up. I hope I’ll get to talk with one of the Cerebras engineers one day; that chip is really one of a kind.

You say this with such confidence and then ask if smaller chips require smaller wafers.

Amazing. I just ordered some. I hope someone fulfills it?!

Why not just use line numbers?

I was wondering the same thing.

It forces you to read after every write. E.g. you edit line 15 so it becomes two lines. Now you need arithmetic for every reference to a later line, or you need to re-read the full file to reindex by line number.
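A tiny sketch of that bookkeeping problem (hypothetical edit, just to illustrate): once an edit changes the number of lines, every line-number reference after the edit point is stale until you shift it or reindex the whole file.

    # A file as a list of lines, referenced by 1-based line numbers.
    lines = [f"line {i}" for i in range(1, 21)]

    ref = 18                       # some later edit targets line 18
    before = lines[ref - 1]

    # Edit: split line 15 into two lines.
    lines[14:15] = ["line 15a", "line 15b"]

    after = lines[ref - 1]
    print(before, "->", after)     # line 18 -> line 17: the reference is now stale

    # Either shift every reference past the edit point by the delta...
    delta = 1
    print(lines[ref + delta - 1])  # line 18 again
    # ...or re-read the file and reindex, which is the extra read the
    # parent comment is talking about.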

Good point!

I just wonder how unique these hashes will be if only 2 characters. It seems like the collision rate would be really high.


We dug into those sorts of questions with hypertokens, a robust hash for lines, code, tables/rows, or any in-context token tagging, to give models photographic memory.

One mechanism we establish is that each model has a fidelity window, i.e., r tokens of content per s tag tokens; each tag token adds extra GUID-like marker capacity via its embedding vector. Since 1-, 2-, and 3-digit numbers are only one token in top models, a single hash token lacks enough capacity and separation in latent space.

We also show the hash should be properly prefix-free, i.e., use unique symbols per digit. For example, if hashing with A-K for the first digit and L-Z for the second, then A,R is a legal hash whereas M,C is not.

We can do all this and more rather precisely, as we show in our arXiv paper on the same; the next update goes deeper into group theory, info theory, etc. on boosting model recall, reasoning, and tool calls by way of robust hashing.


For others, here's the paper: https://arxiv.org/abs/2507.00002

The author writes that these hashes are 2 or 3 characters long, I assume depending on the line count. That's good for almost 48k lines; if you have more than that, you have other issues.

But if it’s a hash rather than a line number, then collisions become much more likely.

There may be many lines that are duplicates, e.g. “{“.
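A quick birthday-problem estimate, assuming (hypothetically) a base-36 alphabet and uniformly distributed content hashes: a 2-character hash has only 1,296 possible values, so collisions among distinct lines show up fast, and identical lines like “{“ collide with certainty unless position or surrounding context is mixed into the hash.

    import math

    def collision_probability(num_lines, hash_space):
        # Probability that at least two of num_lines distinct lines share a
        # hash, assuming uniform hashing (birthday approximation).
        return 1 - math.exp(-num_lines * (num_lines - 1) / (2 * hash_space))

    space_2 = 36 ** 2   # 1,296 values for a 2-char base-36 hash (assumed alphabet)
    space_3 = 36 ** 3   # 46,656 values for 3 chars

    for n in (50, 200, 1000):
        print(f"{n} lines: 2-char p~{collision_probability(n, space_2):.0%}, "
              f"3-char p~{collision_probability(n, space_3):.0%}")

    # 50 lines:   2-char p~61%,  3-char p~3%
    # 200 lines:  2-char p~100%, 3-char p~35%
    # 1000 lines: 2-char p~100%, 3-char p~100%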


The irony is that if you take this to the limit, ChatGPT replaces Fiverr.

Siri has had many, many, many engineers on it for a while.

I don't doubt it, but what were they all doing? The Metaverse had 10k employees on it for multiple years and seemed to be almost at a standstill for long periods of time. What do these massive teams do all day?

Have meetings to figure out how to interact with the other 9,990 employees. Then try to build on the skeleton app left behind by a team of transient engineers who quit after 18 months for their next gig, before throwing it out and starting again from scratch.

Exactly. What Meta accomplished could have been done by a team of less than 40 mediocre engineers. It’s really just not even worth analyzing the failure. I am in complete awe when I think about how bad the execution of this whole thing was. It doesn’t even feel real.

Actually, I would like to see a post-mortem that shows where all the money actually went; they somehow spent ~85x what RSI has raised for Star Citizen, and what they had to show for it was worse than some student projects I've seen.

Were they just piling up cash in the parking lot to set it on fire?


At least part of the funding went to research on hard science related to VR, such as tracking, lenses, CV, 3D mapping etc. And it paid off, IMO Meta has the best hardware and software foundation for delivering VR, and projects like Hyperscape (off-the-shelf, high-fidelity 3D mapping) are stunning.

Whether it was worth it is another question, but I would not be surprised if it gets recycled to power a futuristic AI interface or something similar at some point.


Big company syndrome has existed for a long time. It’s almost impossible to innovate or move fast with 8 levels of management and bloated codebases. That’s why startups exist.

I wish I could be assigned a project and make no progress in over a decade and still have a job.

Been there, done that, would not recommend. Working on such projects is incredibly frustrating and demoralizing.

Why would you want to be forgotten?

We will all be forgotten, given a long enough timeframe.

"It was in the reign of George III that the aforesaid personages lived and quarrelled; good or bad, handsome or ugly, rich or poor, they are all equal now."

What’s 100x better about the TUI?

Doesn’t Claude code have an agents sdk that officially allows you to use the good parts?

Yes but you can't use a subscription with that

I’ve had software issues on an ID4 and iX, but I’ve never had reliability problems. The cars have always just worked with no maintenance. Same with my Model Y, minus any issues at all!

Maybe it’s a “but when it happens you’re screwed” situation. I’m thinking of the story of BMW’s battery safety fuse (the one that trips in an accident to protect first responders and the people in the car) actually tripping when you hit a curb or a pothole a bit too hard. It requires a very expensive trip to the dealer. Some of my Tesla-owning friends have already been to the shop 2-3 times to get something about the suspension fixed.

I have no idea if Chinese EVs are consistently better, Volvo can be seen as one and I don’t think they excel at reliability lately.

P.S. Software issues are reliability issues. The software is a core part of the car and its value proposition, you can’t discount them as “just software issues, not reliability”.


> Some of my Tesla-owning friends have already been to the shop 2-3 times to get something about the suspension fixed.

They're pretty lucky from what I hear! A friend of mine just sold his Model S because he'd been waiting over 7 months for the shop to source a replacement part. Apparently he'd even resorted to begging Musk over X to look into it, because Tesla won't even give him an ETA.


I really wanted to like the iD4.

iD4 feels like they took every lesson of predictable UX design and then intentionally reversed it to make the most frustrating UI possible.

The window controls, touch buttons, screen, steering wheel controls, etc. They all seem designed to answer the question, "how could we make this unnecessarily difficult and distracting to use? How could we possibly cram in yet another State Machine for the user to keep (lose) track of?"

It also has the "try to kill the asthmatic by randomly switching off recirculate while driving through dense wood smoke" feature, naturally.

Considering how much money VW makes on EVs[0], I suppose I'm not surprised by this 'nudge' toward gas cars.

[0] https://www.motor1.com/news/758377/vw-making-less-money-sell...


Why does it matter if Claude Code opens in 3-4 seconds if everything you do with it can take many seconds to minutes? Seems irrelevant to me.

I guess with ~50 years of CPU advancements, 3-4 seconds for a TUI to open makes it seem like we lost the plot somewhere along the way.

Don’t forget they’ve also publicly stated (bragged?) about the monumental accomplishment of getting some text in a terminal to render at 60fps.

So it doesn’t matter at all except to your sensibilities. Sounds to me like they are simply much better at prioritisation than your average HN user, who’d have taken forever to release it, but at least the terminal interface would be snappy…

Some people[0] like their tools to be well engineered. This is not unique to software.

[0] Perhaps everyone who actually takes pride in their craft and doesn’t prioritise shitty hustle culture and making money over everything else.


Aside from startup time, as a tool Claude Code is tremendous. By far the most useful tool I’ve encountered yet. This seems very nit-picky compared to the total value provided. I think y'all are missing the forest for the trees.

Most of the value of Claude Code comes from the model, and that's not running on your device.

The Claude Code TUI itself is a front end, and should not be taking 3-4 seconds to load. That kind of loading time is around what VSCode takes on my machine, and VSCode is a full blown editor.


It’s orders of magnitude slower than Helix, which is also a full blown editor.

When all your other tools are fast and well engineered, slow and bloated is very noticeable.


It’s almost all the model. There are many such tools and Claude Code doesn’t seem to be in any way unique. I prefer OpenCode, so far.

Because when the agent is taking many seconds to minutes, I am starting new agents instead of waiting or switching to non-agent tasks

This is exactly the type of thing that AI code writers don't do well - understand the prioritization of feature development.

Some developers say 3-4 seconds are important to them, others don't. Who decides what the truth is? A human? ClawdBot?


The humans in the company (correctly) realised that a few seconds to open basically the most powerful productivity agent ever made, in exchange for focusing on fast iteration of features, is a totally acceptable trade-off priority-wise. Who would think differently???

This is my point...

You kinda suggested the opposite

> Some developers say 3-4 seconds are important to them, others don't.

Wasn't GTA 5 famous for its very long startup time? Turns out there was some bug, which some random developer/gamer found and handed them a fix for.

Most gamers didn't care; they still played it.

