microtherion's comments | Hacker News

> Not only is the moon further, you also need to use more fuel to land on it

And take off again, if reusable spacecraft are meant to be used.


I'm quite skeptical of Tesla's reliability claims. But for exactly that reason, I welcome a company like Lemonade betting actual money on those claims. Either way, this is bound to generate some visibility into the actual accident rates.

One thing that was unclear to me from the stats cited on the website is whether the quoted 52% reduction in crashes is when FSD is in use, or overall. This matters because people are much more likely to use FSD in situations where driving is easier. So, if the reduction is just during those times, I'm not even sure that would be better than a human driver.

As an example, let's say most people use FSD on straight US Interstate driving, which is very easy. That could artificially make FSD seem safer than it really is.
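To make that selection effect concrete, here's a toy calculation with entirely invented numbers (none of these figures are real Tesla or insurance data), showing how a rate measured only over easy FSD miles can look like a big reduction versus a human's overall rate even with zero actual safety gain:

```python
# Hypothetical numbers to illustrate the selection effect: FSD is
# engaged mostly on easy highway miles, while manual driving covers
# the harder city miles. All figures are invented.

def crashes_per_million_miles(crashes, miles_millions):
    return crashes / miles_millions

# Suppose city driving is inherently 5x riskier than highway driving.
manual_city = crashes_per_million_miles(10, 1.0)         # 10.0
manual_highway = crashes_per_million_miles(2, 1.0)       # 2.0
manual_overall = crashes_per_million_miles(10 + 2, 2.0)  # 6.0

# If FSD is only engaged on highways and merely MATCHES the human
# highway rate, comparing it against the human overall rate still
# shows a large apparent reduction:
fsd_highway = 2.0
reduction = 1 - fsd_highway / manual_overall
print(f"apparent reduction: {reduction:.0%}")  # ~67%, with no real safety gain
```

So unless the 52% figure controls for where and when FSD is engaged, it doesn't tell us much on its own.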

My prior on this is supervised FSD ought to be safer, so the 52% number kind of surprised me, however it's computed. I would have expected more like a 90-95% reduction in accidents.


I think this might be right, but it does two interesting things:

1) it lets Lemonade reward you for taking safer driving routes (or living in an area that's safer to drive in, whatever that means)

2) it (for better or worse) encourages drivers to use it more. This will improve Tesla's training data but also might negatively impact the FSD safety record (an interesting experiment!)


> ...but also might negatively impact the FSD safety record (an interesting experiment!)

As a father of kids in a neighborhood with a lot of Teslas, how do I opt out of this experiment?


Do your kids randomly run into the road? I was worried about that, but mine just don't, for some reason. They're quite careful about it, seemingly by default, after having "getting bumped into by a car" explained to them. I'm not sure if this is something people are just paranoid about because the consequences are so bad, or if some kids really do run out into the road randomly.

Some kids really do just run into the road seemingly randomly. Other kids run in with a clear purpose, not at all randomly, and sometimes (perhaps very rarely, but it only takes once and bad luck) forget to look both ways. Kids are not cookie cutter copies that all behave the same way in the same circumstances (even with the same training).

> Some kids really do just run into the road seemingly randomly. ... sometimes (perhaps very rarely, but it only takes once and bad luck) forget to look both ways.

Just this week I was telling my law school contract-drafting class that part of our job as lawyers and drafters is to try to "child-proof" our contracts, because sometimes clients' staff understandably don't fully appreciate the possible consequences of 'running into the street,' no matter how good an idea it might seem at the time.


I'm more worried about the Teslas hitting my kids when they're on bicycles, or Teslas swerving off the road into the yards. Regardless, it sure would be nice if technology controlling multi-ton vehicles on public roads were subject to regulations, or at least had clearly defined liability.

Kids will randomly run into the road. They might run after a ball or a dog so that it doesn't end up on the other side or get run over, or simply be too excited to remember your stern road safety talk.

The first thing I was taught when I learned to drive was: if you see a ball on the road, you stop immediately. This valuable lesson has saved one kid (and my sanity) with me at the wheel.


This guy couldn't follow that rule: https://www.youtube.com/watch?v=7E_FtC1BLH0

Yes, it does happen. Otherwise smart kids will do dumb stuff sometimes. Like: they see their friend across the road, but at that moment someone on a motorcycle is accelerating out of their driveway; the kid runs across, dead.


Same way you opt out of having drunk drivers drive home along your street and pass out while driving, or drivers getting a stroke or other blood clot while driving and crashing into parked cars.

The insurance industry is a commercial prediction market.

It is often an indicator of true honesty, provided there is no government intervention. Governments intervene in insurance/risk markets when they do not like the truth.

I tried to arrange insurance for an obese western expatriate several years ago in an Asian country, and the (western) insurance company wrote a letter back saying the client was morbidly obese and statistically likely to die within 10 years, and they should lose x weight before they could consider having insurance.


I could see prediction markets handling insurance in the future. It could probably produce fairer prices, but it would have to be done right to avoid bad incentives. Interesting to think about how that might work.

> providing there is no government intervention.

You mean like forcing people to buy it and then shaping what products can and can't be offered with a spiderweb of complex rules?


The clearest example is the state of California preventing insurance companies from increasing annual premiums when risks increase. Please understand I have no political opinion about this. As a result, a lot of insurers have completely withdrawn, and now it's not possible for many people to insure their houses properly.

https://www.theguardian.com/us-news/2023/may/27/state-farm-h...

With no government intervention, the price of all fire insurance in California would increase materially to reflect the genuine risk of wildfire damage.


> quite skeptical of Tesla's reliability claims

I'm sceptical of Robotaxi/Cybercab. I'm less sceptical that FSD, supervised, is safer than fully-manual control.


Where I live isn't particularly challenging to drive (rural Washington), but I'm constantly disengaging FSD for doing silly and dangerous things.

Most notably my driveway meets the road at a blind y intersection, and my Model 3 just blasts out into the road even though you cannot see cross traffic.

FSD stresses me out. It's like I'm monitoring a teenager with their learner's permit. I can probably count the number of trips where I haven't had to take over on one hand.


> I'm constantly disengaging FSD for doing silly and dangerous things.

You meant “I disable FSD because it does silly things”

I read “I disable FSD so I can do silly things”


Exactly. Every bad situation I’ve been in with FSD was when I misread the situation and disengaged it during a maneuver that it was handling safely

It feels unlikely that blindly entering cross traffic, as described in the previous post, is going to be a safe maneuver, though.

I use it for 90% of my driving in Austin and it’s incredible

Do you have HW3 or HW4?

The newest FSD on HW4 was very good in my opinion. Multiple 45min+ drives where I don’t need to touch the controls.

Still not paying $8k for it. Or $100 per month. Maybe $50 per month.


It's your sanity (and money) ¯\_(ツ)_/¯

HW3, unfortunately. Missed the HW4 refresh by a couple of months.

It's edging into the intersection to get a better view on the camera. It's further than you would normally pull out, but it will NOT pull into traffic.

It's not edging; it enters the street going a consistent speed (usually >10mph) from my driveway. The area is heavily wooded, and I don't think it "sees" the cross direction until it's already in the road. Or perhaps the lack of signage or curb make it think it has the right of way.

My neighbor joked that I should install a stop sign at the end of my driveway to make it safer.


Or just manually drive in your own driveway.

The fact that it doesn't handle some specific person's driveway well is far from a condemnation of the system. I'm far more concerned about it mishandling things on "proper" roads at speed.


The software probably has a better idea of their car’s dimensions than a human driver, so will be able to get a better view of traffic by pulling out at just the right distance.

Having handed over control of my vehicles to FSD many times, I’ve yet to come away from the experience feeling that my vehicle was operating in a safer regime for the general public than within my own control.

Keeping a 1-2 car-length stopping distance likely yields over a 50% reduction in at-fault damages.

You can get this with just a fairly dumb radar cruise control system, though.

I think you greatly overestimate humans

The problem IMO is the transition period. A mostly safe system will make the driver feel at ease, but when an emergency occurs and the driver must take over, it's likely that they won't be paying full attention.

We aren’t talking about the average human here.

On average you include sleep deprived people, driving way over the speed limit, at night, in bad weather, while drunk, and talking to someone. FSD is very likely situationally useful.

But you can know most of those adverse conditions don’t apply when you engage FSD on a given trip. As such the standard needs to be extremely high to avoid increased risks when you’re sober, wide awake, the conditions are good, and you have no need to speed.


> On average you include sleep deprived people, driving way over the speed limit, at night, in bad weather, while drunk, and talking to someone. FSD is very likely situationally useful.

Are those people also able to supervise FSD like the law and Tesla expect them to? That's also a question.


FSD will pull over and stop if it detects the driver has passed out. Can the law do that automatically?

> you greatly overestimate humans

Tesla's FSD still goes full-throttle dumbfuck from time to time. Like, randomly deciding it wants to speed into an intersection despite the red light having done absolutely nothing. Or swerving because of glare that you can't see, and a Toyota Corolla could discern with its radars, but which hits the cameras and so fires up the orange cat it's simulating on its CPU.


Yeah, even Corollas have better sensors than a Tesla for driving in fog. It's embarrassing.

> I'm less sceptical that FSD, supervised, is safer than fully-manual control.

I'm very skeptical that the average human driver properly supervises FSD or any other "full" self driving system.


Supervised FSD — automating 99.9% of driving and expecting drivers to be fully alert for the other .1% — appears to go against everything we know about human attention.

this ^^

> betting actual money on those claims

Insurance companies can let marketing influence rates to some degree, with programs that tend to be tacked on after the initial rate is set. This self-driving car program sounds an awful lot like safe driver programs like GEICO Clean Driving Record, State Farm Good Driver Discount, Progressive Safe Driver, Progressive Snapshot, and Allstate Drivewise. The risk assessment seems to be less thorough than the general underwriting process, and to fall within some sort of risk margin, so to me it seems gimmicky and not a true innovation at this point.


Lemonade will have some actual claim data to support this already, not relying on the word of Tesla.

They don’t bet money on just “I’m quite skeptical because I hate the man”, but on actual data provided by the company.

That’s the difference.


The skepticism and hate is based on observing decades of shameless dishonesty, which is itself a form of data provided by the company: https://motherfrunker.ca/fsd/

Still doesn’t change my point: as of today, being skeptical based on outdated data or historical series is just nonsense. I mean, insurance quotes work in a totally different way.

Do you drive a HW4? I’m 90% FSD on my total car miles

It's all a part of focusing on their core business, like… paying a $28M bribe to Melania Trump.


The other huge, huge difference is that one of the Steves has demonstrated he was able to build a successful product without the other's assistance.


You could say that about the iPod or the iPhone, which Woz wasn't involved in. But when you do the math, there's only one Woz: he was essential to defining the company in the 20th century, and look how many people it took to "replace" him when it came to Jobs "alone" defining the company in the 21st century.


You could also say it about the Mac, which Woz was, at best, peripherally involved in. Not saying that Jobs created these products "alone" — he obviously did not. But he was a key contributor.

Meanwhile, Woz has been involved in all sorts of products, including a cryptocurrency, and I can't think of a single one that got significant traction.


Another thing that people fail to remember is that Woz designed the Apple II, which is what made Apple a highly profitable company for many years, but instead of embracing that success, Jobs repeatedly tried to kill and replace the Apple II with the Lisa, then the Macintosh, and drove Apple into financial trouble. Apple would have done better, at that time, by simply building more advanced and backwards compatible followups to the Apple II, which is what consumers actually wanted (the original Macintosh was an expensive piece of shit).

The Apple II had 7 expansion slots and was easy to open and service yourself. It was a machine designed for hackers, and it was highly flexible. Jobs kept trying to push his all-in-one closed design when it made no sense. He did unfortunately succeed eventually. What Jobs did after his return was to turn Apple into a "luxury brand", where iPhones are perceived a bit like Prada handbags. One thing I will give Apple is that there is still no PC equivalent to Apple laptops. That can probably only really happen if mainstream PC manufacturers fully embrace Linux.


As Henry Ford is (spuriously) claimed to have said: "If I'd asked my customers what they wanted, they'd have said a faster horse."

Apple did build Apple II models, up to and including the Apple IIgs. They had a good run. And the line was not without its flops — the Apple III was a notorious disaster, though allegedly more due to Jobs than Wozniak.

But none of the pure 8-bit PC vendors survived the 1980s. One of the better qualities of Jobs was that he was not afraid of the company disrupting itself — foregoing the short term success of the Apple II line in favor of the Mac, which in the long run was vastly superior. The same situation played out with the iPhone disrupting the iPod.


I get to visit my 90-year-old mother-in-law a few times a week to get her TV setup (cable box running Android TV, connected to a TV running Android TV — FML) working again.

To make matters worse, the cable box remote works via Bluetooth, the TV remote over IR, so getting any universal remote that works with both AND is simple seems a difficult prospect.

What are people even doing for universal remotes these days? Our household is equipped with Logitech Harmony remotes, which are no longer being made, and I dread the day they stop working.


When Logitech announced they were stopping making them, I bought 3 new Logitech Harmony remotes. I'm on my last one! I don't know what I am going to do after that one dies :-(


Never heard that one (it may indeed be used that way, but if it were the only reason Apple would probably keep it in the Apple internal parts of their OS installs).

It would also be of limited use, as the engine is purely CPU based; it is single threaded and does not even use SIMD AFAIK, let alone GPU features or the neural engines.


It can't be deleted because it's part of the system tools :)


A time limit is also deterministic in some sense. Level settings used to be mainly time based, because computers at lower settings were no serious competition to decent players, but you don't necessarily want to wait for 30 seconds each move, so there were more casual and more serious levels.

Limiting the search depth is much more deterministic. At lower levels, it has hilarious results, and is pretty good at emulating beginning players (who know the rules, but have a limited skill of calculating moves ahead).

One problem with fixed search depth is that I think most good engines prefer to use dynamic search depth (where they sense that some positions need to be searched a bit deeper to reach a quiescent point), so they will be handicapped with a fixed depth.


> One problem with fixed search depth is that I think most good engines prefer to use dynamic search depth (where they sense that some positions need to be searched a bit deeper to reach a quiescent point), so they will be handicapped with a fixed depth.

Okay, but I want to point out nobody was suggesting a depth limit.

For a time-limited algorithm to work properly, it has to have some kind of sensible ordering of how it evaluates moves, looking deeper as time passes in a dynamic way.

Switch to an iteration limit, and the algorithm will still have those features.
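To sketch what a node-budgeted (rather than wall-clock-limited) iterative deepening search might look like: the sketch below uses an invented toy game (positions are integers, moves add 1 or 2, the game ends at 8, and the evaluation is arbitrary) rather than any real chess engine's code. The point is the driver: the search deepens one ply at a time, keeps only fully completed iterations, and stops deterministically when a node budget is spent, so the same budget produces the same move on any hardware.

```python
import math

# Toy game, purely for illustration (not real engine code): a position
# is an integer, each "move" adds 1 or 2, and play ends at 8.
def moves(pos):
    return [1, 2] if pos < 8 else []

def apply_move(pos, move):
    return pos + move

def evaluate(pos):
    return pos if pos % 2 == 0 else -pos  # arbitrary static evaluation

class BudgetExceeded(Exception):
    pass

class Search:
    """Iterative deepening negamax with a node budget instead of a
    wall-clock limit, so results don't inflate with faster hardware."""

    def __init__(self, node_limit):
        self.node_limit = node_limit
        self.nodes = 0

    def negamax(self, pos, depth, alpha, beta):
        self.nodes += 1
        if self.nodes > self.node_limit:
            raise BudgetExceeded
        if depth == 0 or not moves(pos):
            return evaluate(pos)
        for m in moves(pos):
            alpha = max(alpha, -self.negamax(apply_move(pos, m),
                                             depth - 1, -beta, -alpha))
            if alpha >= beta:
                break  # beta cutoff
        return alpha

    def best_move(self, pos):
        best = None
        try:
            for depth in range(1, 64):  # iterative deepening
                current, current_score = None, -math.inf
                for m in moves(pos):
                    score = -self.negamax(apply_move(pos, m), depth - 1,
                                          -math.inf, math.inf)
                    if score > current_score:
                        current, current_score = m, score
                best = current  # keep only fully completed iterations
        except BudgetExceeded:
            pass  # budget spent; fall back to last completed depth
        return best
```

Running `Search(500).best_move(0)` twice gives the identical move and identical node count, which is exactly the determinism a time limit can't provide.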


Heh, I was just discussing this some minutes ago: https://news.ycombinator.com/item?id=46595777


Oh, this led me down a rabbit hole…

I was maintainer of the Chess app from the early 2000s to about 2015. We first noticed in 2004 that level 1 (which was then "Computer thinks for 1 second per move") was getting stronger with each hardware generation (and in fact stronger than myself).

So we introduced 3 new levels, with the Computer thinking 1, 2, or 3 moves ahead. This solved the problem of the engine getting stronger (though the jump from "3 moves ahead" to "1 second" got worse and worse).

A few years after I had handed off the project, somebody decided to meddle with the level setting code (I was not privy to that decision). The time based levels were entirely replaced with depth based levels (which eliminates the strength inflation problem, but unfortunately was not accompanied by UI changes). But for some reason, parsing of the depth setting was broken as well, so the engine now always plays at depth 40 (stronger than ever).

This should be an easy fix, if Apple gets around to making it (Chess was always a side project for the maintainers). I filed feedback report 21609379.

It seems that somebody else had already discovered this and fixed it in a fork of the open source project: https://github.com/aglee/Chess/commit/dfb16b3f32e5a6633d2119...


Reportedly, Meta is paying top AI talent up to $300M for a 4-year contract. As much as I'm in favor of paying engineers well, I don't think salaries like this (unless they are across the board for the company, which they are of course not) are healthy for the company long term (cf. Anthony Levandowski, who got money thrown at him by Google, only to rip them off).

So I'm glad Apple is not trying to get too much into a bidding war. As for how well orgs are run, Meta has its issues as well (cf the fiasco with its eponymous product), while Google steadily seems to erode its core products.


Why would paying everyone $300M across the board be healthier than using it as a tool to (attempt to) attract the best of the best?

