This has done the rounds on other platforms. A couple of important points:
- the failure here is that the car didn't stop for the bus on the other side of the road with the extended stop sign. (Obviously a kid running out from behind a car this late is fairly difficult for any human or self driving system to avoid)
- the FSD version for the robotaxi service is private and wasn't the one used for this test. The testers here only have access to the older public version, which is supposed to be used with human supervision
- The Dawn Project is a long-time Tesla FSD opponent that acts in bad faith - they are probably relying on a false equivalence between the FSD beta and robotaxi FSD
Nevertheless this is a very important test result for FSD Supervised! But I don't like that the Dawn Project is framing this as evidence for why FSD robotaxi (a different version) should not be allowed without acknowledging that they tested a different version.
I don't see why Tesla would deserve the benefit of the doubt here. We cannot know how well the actual robotaxi software will work, so I think it is fair to extrapolate from the parts we can observe.
Re: extrapolation: I agree with that, but remember there's sampling error. The crashes/failures go viral, but the lives saved get zero exposure or headlines. I don't think that means you can just ignore issues like this, but it does mean it's sensible to augment the data point of this video by imagining the scenarios where the self-driving car performs more safely than the average human driver.
I absolutely do think that self-driving cars will save many lives in the long run. But I also think it is entirely fair to focus on the big, visible mistakes right now.
This is a major failure: failing to observe a stop sign and a parked school bus are critical mistakes. If you can't manage those, you're not ready to be on the road without a safety driver yet. There was nothing particularly difficult about this situation; these are the basics you must handle reliably before we even get to all the trickier situations those cars will encounter in the real world at scale.
I agree it's a major mistake + should get a lot of focus from the FSD team. I'm just unsure whether that directly translates to prohibiting a robotaxi rollout (I'm open to the possibility it should though).
I guess the thing I'm trying to reconcile is that even very safe drivers make critical mistakes extremely rarely, so the threshold at which FSD is safer than even the top 10% of human drivers likely includes some nonzero level of critical mistakes. Right now several people are mining Tesla's FSD for any place it makes critical mistakes, and these are well publicised, so I think we get an inflated sense of how common they are. This is speculation, but if true it leaves some possibility of it being significantly safer than the median driver while still allowing for videos like this to proliferate.
I do wish Tesla released all stats for interventions/near misses/crashes so we could have a better and non-speculative discussion about this!
Caveat/preface to prevent trolls: FSD is a sham and money grab at best, death trap at worst, etc.
But I've read through your chain of replies to OP, and maybe I can help with my POV.
OP is replying in good faith, essentially saying "this sampled incident is out of scope of production testing/cars for several reasons, all of which greatly skew the testing, and it comes from a known bad-actor source."
And you reply with "Zero systemic reproducible mistakes is the only acceptable criterion."
Well then, you should know that is the current situation. In Tesla's testing, they achieve this. The "test" in this article, as the OP is pointing out, is not a standardized test run by Tesla on current platforms. So be careful with your ultimatums, or you might give the corporation a green light to say "look! we tested it!".
I am not a Tesla fan. However, I am also aware that yesterday, thousands of people across the world were mowed down by human operators.
If I put out a test video showing a human running over another human with minimal circumstances met (i.e. rain, distraction, tires, traffic density, etc.), would you call for a halt on all human driving? Of course not; you'd investigate the root cause, which most of the time is distracted or impaired driving.
Tesla would like to replace that with FSD. (And make a boatload of money as a result)
My point is that we therefore can (and should!) hold Tesla to higher standards.
'Better than human' as a bar invites conflict of interest, because at some point Tesla is weighing {safety} vs {profit}, given that increasing safety costs money.
If we don't severely externally bias towards safety, then we reach a point where Tesla says 'We've reached parity with human, so we're not investing more money in fixing FSD glitches.'
And furthermore, without mandated reporting (of the kind Musk just got relaxed), we won't know at scale.
No! Ignoring a stop sign is such a basic driving standard that it's an automatic disqualification.
A driver that misses a stop sign would not have my kids in their car.
They could be the safest driver on the racetrack; it does not matter at that point.
In the video (starting at ~13 seconds), the Tesla is at least 16 and probably 20 car lengths from the back of the bus, with the bus's red flashing lights on the entire time.
If the Tesla can't stop for the bus (not the kid) in 12 car lengths, that's not p-hacking, that's Tesla FSD being both unlawful and obviously unsafe.
I actually didn't say that and am not arguing it formally - I said what I said because I think that the version difference is something that should be acknowledged when doing a test like this.
I do privately assume the new version will be better in some ways, but have no idea if this problem would be solved in it - so I agree with your last sentence.
I don't think there's an official divergence in terminology other than the version numbers (which have mostly stopped incrementing for FSD vehicle owners, while a lot of work is going into new iterations of the version running on Tesla's robotaxi fleet).
Then I struggle to understand what they should have acknowledged here concerning the software used. That they do not have access to a version of FSD that isn't currently accessible to the public? I'd think that's self-evident for any organisation not affiliated with Tesla.
Exactly what I wanted to add. Every single time there is hard evidence of what a failure FSD is, someone points out that they didn't use the latest beta. And of course they provide zero evidence that this newer version actually addresses the problem. Anyone who knows anything about software and updates understands how new versions can actually introduce new problems and new bugs…
In this case, you stop for the bus that is displaying the flashing red lights.
Every driver/car that obeys the law has absolutely no problem avoiding this collision with the child, which is why the bus has flashing lights and a stop sign.
If it merely slowed down, it would still be driving illegally. Everywhere I've driven in the USA, if there is a stopped school bus in front of you, or on the other side of the road, all traffic on the road must stop and remain stopped until the school bus retracts the sign and resumes driving.
I mean, it can be programmed any way, so why not program it to follow both rules: if the sign is extended, stop; otherwise, slow down.
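Purely to illustrate the rule being described (my own toy sketch with hypothetical names, not any real FSD interface), it amounts to something like:

```python
# Toy sketch of the school-bus rule described above (hypothetical names,
# not any real FSD API): stop for an extended stop sign, otherwise slow
# down while passing a stopped bus.
def school_bus_behavior(sign_extended: bool, bus_stopped: bool) -> str:
    if sign_extended:
        return "stop"       # law requires a full stop on both sides of the road
    if bus_stopped:
        return "slow_down"  # hazard: children may step out from behind the bus
    return "proceed"
```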
If I put a stop sign somewhere, is that legal? Or is there some statute (or at least a local ordinance) that says that yellow buses on such-and-such a route can have moving stop signs?
Yes, in the US, school buses come equipped with a stop sign that they extend when stopping to let kids on and off the bus. You (as a driver) are required to stop (and not pass) on both sides of the road while the bus has its sign extended.
> Obviously a kid running out from behind a car this late is fairly difficult for any human or self driving system to avoid
Not really - it's a case of slowing down and anticipating a potential hazard. It's a fairly common situation with any kind of bus, or similarly if you're overtaking a stationary high-sided vehicle as pedestrians may be looking to cross the road (in non-jay-walking jurisdictions).
Yes. And given the latency of cameras (or even humans), their inability to see around objects, and the fact that dogs and kids can move fast from hidden areas into the car's path, driving really slowly next to large obstacles until you can see behind them becomes all the more important.
One of the prime directives of driving for humans and FSD systems alike must be "never drive faster than brakes can stop in visible areas". This must account for scenarios such as stopped obstacles, or something possibly coming the wrong way around a mountain turn.
> never drive faster than brakes can stop in visible areas
Here in the UK, that's phrased in the Highway Code as "Drive at a speed that will allow you to stop well within the distance you can see to be clear". It's such a basic tenet of safety as you never know if there's a fallen tree just round the next blind corner etc. However, it doesn't strictly apply to peds running out from behind an obstruction as your way ahead can be clear, until suddenly it isn't - sometimes you have to slow just for a possible hazard.
And that should be a case where automated systems can easily beat people. It's "just" math: there's no interpretation, no reasoning, no reading signs, no predicting other drivers, just crunching the numbers based on stopping distance and field of view. If they can't even do this, what are they for?
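As a rough illustration of how simple that math is (my own sketch with assumed reaction-time and braking figures, not anything from Tesla), the maximum safe speed for a given visible clear distance can be computed directly:

```python
# Minimal sketch of "stop within the distance you can see to be clear".
# Assumed inputs: visible clear distance ahead (m), reaction time (s),
# and braking deceleration (m/s^2). Returns the largest speed (m/s)
# that still stops within that distance.
import math

def max_safe_speed(visible_distance_m: float,
                   reaction_time_s: float = 1.0,
                   deceleration_mps2: float = 6.0) -> float:
    # Stopping distance: d = v*t_r + v^2 / (2a). Solve the quadratic for v:
    # v = a * (-t_r + sqrt(t_r^2 + 2d/a))
    a, t = deceleration_mps2, reaction_time_s
    v = a * (-t + math.sqrt(t * t + 2.0 * visible_distance_m / a))
    return max(v, 0.0)

if __name__ == "__main__":
    for d in (10, 25, 50):  # metres of clear road ahead
        v = max_safe_speed(d)
        print(f"{d:>3} m visible -> max ~{v:4.1f} m/s ({v * 3.6:5.1f} km/h)")
```

With these assumed figures, roughly 10 m of clear road only justifies about 23 km/h, which is why passing a stopped bus at speed fails the rule even before the stop-sign law comes into it.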
The simulation of a kid running out from behind the bus is both realistic, and it points out another aspect of the problem with FSD. It didn't just pass the bus illegally. It was going far too fast while passing the bus.
As for being unrepresentative of the next release of FSD, we've had what, eight or ten years of "it's going to work on the next release".
- FSD has been failing this test publicly for almost three years, including in a Super Bowl commercial. It strains credulity to imagine that they have a robust solution that they haven't bothered to use to shut up their loudest critic.
- The Robotaxi version of FSD is reportedly optimized for a small area of Austin, and is going to extensively use tele-operators as safety drivers. There is no evidence that Robotaxi FSD isn't "supposed" to be used with human supervision; it's just that its supervision will be subject to latency and limited spatial awareness.
- The Dawn Project's position is that FSD should be completely banned because Tesla is negligent with regard to safety. Having a test coincide with the Robotaxi launch is good for publicity, but the distinction isn't really relevant because the fundamental flaw is with the company's approach to safety, regardless of FSD version.
- Tesla doesn't have an inalienable right to test 2-ton autonomous machines on public roads. If they wanted to demonstrate the safety of the robotaxi version they could publish the reams of tests they've surely conducted and begin reporting industry standard metrics like miles per critical disengagement.
>(Obviously a kid running out from behind a car this late is fairly difficult for any human or self driving system to avoid)
Shouldn't that be the one case where a self-driving system has an enormous natural advantage? It has faster reflexes, and it doesn't require much, if any, interpretation, understanding of signs, or prediction of other drivers' behavior. At the very worst, the car should be able to detect a big object in the road and try to brake and avoid the object. If the car can't take minimal steps to avoid crashing into any given thing that's in front of it on the road, what are we even doing here?
I agree - it's why I think some type of radar/lidar is necessary for autonomous vehicles to be sufficiently safe. I get the theory that humans can process enough info from two eyes to detect objects, so multiple cameras should be able to do the same, but it looks like it's a tough problem to solve.
My understanding is that Tesla is the only manufacturer trying to make self-driving work with just visual-spectrum cameras; all other vendors use radar/lidar _and_ visual-spectrum cameras.
They've painted themselves into a corner, really: if they start using anything other than just cameras, they'll be on the hook for selling previous cars as fully FSD-capable and will presumably have to retrofit radar/lidar for anyone who was mis-sold a Tesla.
I’ve seen this “next version” trick enough times and in enough contexts to know when to call BS. There’s always a next version, it’s rarely night-and-day better, and when it is better you’ll have evidence, not just a salesman’s word. People deserve credit/blame today for reality today, and if reality changes tomorrow, tomorrow is when they’ll deserve credit/blame for that change. Anybody who tries to get you to judge them today for what you can’t see today is really just trying to make you dismiss what you see in favor of what you’re told.