Playing Metal Gear Solid 2 is one of the memories I most cherish. I could only play it at the Taekwondo gym I was attending. I never finished it because I only had a couple of hours at the gym and could only play during break time. Oh, and I was always waiting for break time!
Street Fighter 2 Championship Edition (whichever was the one with the most characters) as well as Street Fighter Alpha were great for the arcade machine.
Most of my buddies at the time would come over, have a beer, immediately hang it in the boat-coozy cup holders (the ones that gyro), and go to town shoulder to shoulder playing SF2. The gyroing cup holders kept the beers from spilling as the arcade cabinet rocked back and forth from two grown men having a virtual fist fight. Best times.
> How many human drivers do you think would pass the bar you're setting?
How many human drivers would pass it, and what proportion of the time? Even the best drivers do not constantly maintain peak vigilance, because they are human.
> IMO, the bar should be that the technology is a significant improvement over the average performance of human drivers (which I don't think is that hard), not necessarily perfect.
In practice, this isn't reasonable, because "hey, we're slightly better than a population that includes the drunks, the inattentive, and the infirm" is not going to win public trust. And, of course, a system that is barely better than average humans might worsen safety if it ends up replacing driving by those who would normally drive especially safely.
I think "better than the average performance of a 75th or 90th percentile human driver" might be a good way to look at things.
It's going to be a weird thing, because odds are the distribution of accidents that do happen won't look much like the human one. It will have superhuman saves (like that scooter one), but it will also crash in situations where we can't really picture a human crashing.
I'm reminded of airbags; even first generation airbags made things much safer overall, but they occasionally decapitated a short person or child in a 5MPH parking lot fender bender. This was hard for the public to stomach, and if it's your kid who is internally decapitated by the airbag in a small accident, I don't think you'll really accept "it's safer on average to have an airbag!"
The parent comment said the bar should be "significant improvement" over the average performance of human drivers.
Then you said, "this isn't reasonable", and the bar shouldn't be "slightly better" or "barely better". It should be at least better than the 75th percentile driver.
It sounds like you either misread the parent comment or you're phrasing your response as disagreement despite proposing roughly the same thing as the parent comment.
All depends on what you read as "significant improvement".
A 20% lower fatal crash rate than the average might be a significant improvement; from a public health standpoint, reducing traffic deaths by 20% would be huge.
But if you don't get the worst drivers to replace their driving with autonomous, that "20% less than average" might actually make things worse. That's my point. The bar has to be pretty dang high to be sure that you will actually make things better.
> In practice, this isn't reasonable, because "hey we're slightly better than a population that includes the drunks, the inattentive, and the infirm" is not going to win public trust.
Sadly, you're right, but as rational people, we can acknowledge that it should. I care about reducing injuries and deaths, and the %tile of human performance needed for that is probably something like 30%ile. It's definitely well below 75%ile.
> And, of course, a system that is barely better than average humans might worsen safety if it ends up replacing driving by those who would normally drive especially safely.
It's only if you get the habitually drunk (a group that is overall impoverished), the very old, etc, to ride Waymo that you reap this benefit. And they're probably not early adopters.
Uber and Lyft were supported by police departments because they reduced drunk driving. Drunk driving isn't just impoverished alcoholics. People go to bars and parties and get drunk all the time.
You also solve for people texting (or otherwise using their phones) while driving, which is pretty common among young, tech-adopting people.
> Drunk driving isn't just impoverished alcoholics. People go to bars and parties and get drunk all the time
Yes, but the 5th-percentile drivers who cause a huge share of the most severe accidents are "special" in various ways. Most of them are probably not autonomy early adopters.
The guy who decided to drive on the wrong side of a double yellow on a windy mountain road and hit our family car in a probable suicide attempt was not going to replace that trip with Waymo.
Many states in the US have the Basic Speed Law, e.g. California:
> No person shall drive a vehicle upon a highway at a speed greater than is reasonable or prudent having due regard for weather, visibility, the traffic on, and the surface and width of, the highway, and in no event at a speed which endangers the safety of persons or property.
The speed limit isn't supposed to be a carte blanche to drive at that speed no matter what; the basic speed law is supposed to "win." In practice, enforcement is a lot more clear cut at the posted speed limit and officers don't want to write tickets that are hard to argue in court.
That law seems more about assigning blame to drivers if they hit someone. So in practice it's not enforced on its own, but in accidents it becomes a justification for assigning fault.
I mean yeah. If you were traveling at some speed and caused damage to persons or property, that's reasonable, but refutable, evidence that you were traveling at a speed that endangered persons or property.
And at the same time, if you were traveling at some speed and no damage was caused, it's harder to say that persons or property were endangered.
> But if there are a bunch of children milling about an elementary school in a chaotic situation with lots of double parking, 17 mph is too fast
Hey, I'd agree with this-- and it's worth noting that 17^2 - 5^2 > 16^2, so even 1MPH slower would likely have resulted in no contact in this scenario.
But, I'd say the majority of the time it's OK to pass an elementary school at 20-25MPH. Anything carries a certain level of risk, of course. So we really need to know more about the situation to judge the Waymo's speed. I will say that generally Waymo seems to be on the conservative end in the scenarios I've seen.
(My back of napkin math says an attentive human driver going at 12MPH would hit the pedestrian at the same speed if what we've been told is accurate).
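For what it's worth, here's that napkin math as a quick script. The braking deceleration is an assumed value (roughly 0.7 g), though the conclusion doesn't actually depend on it:

    # Constant-deceleration check of the 17^2 - 5^2 > 16^2 claim.
    # The deceleration is an assumption; the sign of the result doesn't depend on it.
    MPH = 0.44704          # m/s per mph
    a = 7.0                # assumed braking deceleration, ~0.7 g

    # Braking distance implied by "17 mph at the start of braking, 5 mph at contact":
    d = ((17 * MPH) ** 2 - (5 * MPH) ** 2) / (2 * a)     # ~3.8 m

    # Same distance, starting 1 mph slower:
    v_sq = (16 * MPH) ** 2 - 2 * a * d
    print(d, v_sq)         # v_sq is negative: the car stops short of the contact point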
> Hey, I'd agree with this-- and it's worth noting that 17^2 - 5^2 > 16^2, so even 1MPH slower would likely have resulted in no contact in this scenario.
Only with instant reaction time and linear deceleration.
Neither of those is the case. It takes time for even a Waymo to recognize a dangerous situation and apply the brakes, and vehicle deceleration is not actually linear.
> It takes time for even a Waymo to recognize a dangerous situation
Reaction time makes the math even better here. You travel v1 * reaction_time no matter what, before entering the deceleration regime. So if v1 gets smaller, the distance burned during the reaction phase shrinks too, and you get to spend a greater proportion of the available distance in the deceleration regime.
> linear deceleration.
After the reaction time, stopping distance is pretty close to proportional to v^2. There are weird effects at high speed (contribution from drag) and at very low speed, but they have pretty modest contributions.
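To put rough numbers on that (a sketch under assumed values, not the actual incident parameters): the reaction-time distance scales with v1 while the braking distance scales with v1^2, so dropping the initial speed shrinks both terms.

    # Impact speed over a fixed available distance: a reaction phase at constant
    # speed, then constant deceleration. All inputs are illustrative assumptions.
    MPH = 0.44704

    def impact_speed_mph(v0_mph, available_m, reaction_s=0.3, decel=7.0):
        v0 = v0_mph * MPH
        d_react = v0 * reaction_s                  # distance covered before braking
        d_brake = max(available_m - d_react, 0.0)  # distance left for braking
        v_sq = v0 * v0 - 2 * decel * d_brake
        return max(v_sq, 0.0) ** 0.5 / MPH

    # With ~6 m available, the slower start wins twice (shorter reaction distance
    # and less speed to shed):
    print(impact_speed_mph(17, 6.0))   # ~5 mph at contact
    print(impact_speed_mph(16, 6.0))   # 0 -- stops short

    # Post-reaction stopping distance grows roughly with the square of speed:
    for v in (10, 20, 40):
        print(v, "mph:", round((v * MPH) ** 2 / (2 * 7.0), 1), "m")   # ~1.4, 5.7, 22.8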
Swedish schools still have students who walk there. I live near one and there are very few cars that exceed 20km/h during rush hours. Anything faster is reckless even if the max over here is 30 km/h (19 mph).
The schools I'm thinking of have sidewalks with some degree of protection/offset from street, and the crossings are protected by human crossing guards during times when students are going to schools. The posted limits are "25 (MPH) When Children Are Present" and traffic generally moves at 20MPH during most of those times.
There are definitely times and situation where the right speed is 7MPH and even that feels "fast", though, too.
It seems it was driving pretty slow (17MPH) and they do tend to put in a pretty big gap to the right side when they can.
There are kinds of human sensing that are better when humans are maximally attentive (seeing through windows/reflections). But there's also the seeing-in-all-directions, radar, superhuman reaction time, etc, on the side of the Waymo.
The original finding on the cause of death falls apart just from simple back-of-napkin math.
87 ng/mL.
Baby eats 30mL per hour. That's about 2.6 micrograms of morphine per hour.
An elimination half-life in neonates of ~8 hours means about 30 micrograms in the system at equilibrium, if the baby is constantly fed this, absorbs all of it, and pharmacokinetics are linear (it takes 4-5 half-lives to get there). In reality a neonate likely absorbs well under 1/3, so you'd expect under 10 micrograms at equilibrium.
25-50 micrograms/kilogram, every 6 hours, is normal dosing of morphine in a neonate when it is necessary (resulting in a peak amount in the body of ~60-120 ug/kg after repeated dosing).
Compare: ~60-120 ug/kg from therapeutic dosing versus ~10 micrograms total in the neonate's body (3-4 kilos, so ~3 ug/kg?).
And then, you end up with acetaminophen and codeine in the neonate's stomach, with no morphine... Even though these do not end up in breast milk in significant quantities.
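Here's the same napkin math as a quick script; the absorption fraction and body weight are assumptions on my part:

    import math

    # Steady-state morphine burden from breast milk, using the figures above.
    milk_conc_ng_per_ml = 87        # reported milk concentration
    intake_ml_per_hr = 30           # assumed feeding rate
    half_life_hr = 8                # approximate neonatal elimination half-life
    absorbed_fraction = 0.3         # assumed oral absorption (upper bound of "well under 1/3")
    weight_kg = 3.5                 # assumed neonate weight

    intake_ug_per_hr = milk_conc_ng_per_ml * intake_ml_per_hr / 1000    # ~2.6 ug/hr
    k_el = math.log(2) / half_life_hr                                   # elimination rate constant
    body_burden_ug = intake_ug_per_hr / k_el                            # rate in = rate out at equilibrium

    print(body_burden_ug)                                    # ~30 ug if 100% absorbed
    print(body_burden_ug * absorbed_fraction)                # ~9 ug at ~30% absorption
    print(body_burden_ug * absorbed_fraction / weight_kg)    # ~2.6 ug/kg, vs 25-50 ug/kg per therapeutic dose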
Yah, I feel like Linux was way worse with printers in the past. Now the story is more like: you'll have a different set of printer issues across the major OSes, but no OS is clearly better or worse.
> It was a real time computer NOT designed for speed but real time operations.
More than anything, it was designed to be small and use little power.
But these little ARM Cortex-M4Fs that we're comparing to are also designed for embedded, possibly hard-real-time operation. And the dominant factors in the playback experience through earbuds are response time and jitter.
If the AGC could get a capsule to the moon doing hard real-time tasks (and spilling low priority tasks as necessary), a single STM32F405 with a Cortex M4F could do it better.
Actually, my team is going to fly an STM32F030 for minimal power management tasks-- but still hard real-time-- on a small satellite. Cortex-M0. It fits in 25 milliwatts, versus the AGC's 55 watts. We're clocked slow, but still exceed the throughput of the AGC by ~200-300x. Funnily enough, the amount of RAM is about the same as the AGC :D It's 70 cents in quantity, but we have to pay three whole dollars at quantity 1.
> NASA used a lot of supercomputers here on earth prior to mission start.
Fine, let's compare to the CDC 6600, the fastest computer of the late 60's. An M4F @ 300MHz is a couple hundred single precision megaflops; the CDC 6600 was like 3 not-quite-double-precision megaflops. The hacky "double single precision" techniques have comparable precision; figure those are probably about 10x slower on average, so each M4F can do about 20 CDC-6600-equivalent megaflops, i.e. it's roughly 5-10x faster. The amount of RAM is about the same on this earbud.
His 486-25 -- if a DX model with the FPU -- was probably roughly twice as fast as the 6600 and probably had 4x the RAM, and used 2 orders of magnitude less power and massed 3 orders of magnitude less.
Control flow, integer math, etc., compare even more favorably than that.
Just a few more pennies gets you a microcontroller with a double precision FPU, like a Cortex-M7 with the FPv5-D16, which at 300MHz is good for maybe 60 double precision megaflops-- compared to the 6600, 20x faster and more precision.
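Spelling out the ballpark arithmetic (all the per-chip throughput figures here are rough assumptions):

    # Rough equivalence math for the comparison above. All figures are ballpark.
    cdc6600_mflops = 3             # late-60s flagship, ~60-bit floats

    m4f_sp_mflops = 200            # Cortex-M4F @ 300 MHz, single precision (assumed)
    double_single_penalty = 10     # assumed slowdown for software "double-single" tricks
    m4f_equiv = m4f_sp_mflops / double_single_penalty       # ~20 "6600-class" megaflops
    print(m4f_equiv / cdc6600_mflops)     # roughly 5-10x the CDC 6600

    m7_dp_mflops = 60              # Cortex-M7 with hardware double precision (assumed)
    print(m7_dp_mflops / cdc6600_mflops)  # ~20x the CDC 6600, at full precision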
I have thought about this a little more, and looked into things. Since NASA used the 360/91, and had a lot of 360's and 7094's... all of NASA's 60's computing couldn't quite fit into a single 486DX-25. You'd be more in 486DX4-100 territory to replace everything comfortably, and you'd want a lot of RAM-- like 16MB.
It looks like NASA had 5 360/75's plus a 360/91 by the end, plus a few other computers.
The biggest 360/75's (I don't know that NASA had the highest-spec model for all 5) were probably roughly 1/10th of a 486-100 each, with 1 megabyte of RAM. The 360/91 that they had at the end was maybe 1/3rd of a 486-100, with up to 6 megabytes of RAM.
Those computers alone would be about 85% of a 486-100. Everything else was comparatively small. And, of course, you need to include the benefits from getting results on individual jobs much faster, even if sustained max throughput is about the same. So all of NASA, by the late 60's, probably fits into one relatively large 486DX4-100.
Incidentally, one random bit of my family lore; my dad was an IBM man and knew a lot about 360's and OS/360. He received a call one evening from NASA during Apollo 13 asking for advice about how they could get a little bit more out of their machines. My mom was miffed about dinner being interrupted until she understood why :D
PS: Try the MSP430 "F" models for low power. These can be CRAZY efficient.
PS: Don't forget to wire the solar panel directly to the system: then your satellite might still be talking even 50 years from now, like some ham satellites from the Cold War (OSCAR 7, I think).
NyanSat; I'm the PI and mentor for a team of high school students who were selected by NASA CSLI.
> PS: Try the MSP430 "F" models for low power. These can be CRAZY efficient.
Yah, I've used MSP430 in space. The STM32F0 fits what we're using it for here. The main flight computer we designed ourselves; it's an RP2350 with MRAM. Some of the avionics details are here: https://github.com/OakwoodEngineering/ObiWanKomputer
> PS: Don't forget to wire the solar panel directly to the system: then your satellite might still be talking even 50 years from now, like some ham satellites from the Cold War (OSCAR 7, I think).
Current ITU guidelines make it clear this is something we're not supposed to do, so that we can actually end transmissions from the satellite. We'll re-enter/burn up within
This is the "little part" of what fits into an earpiece. Each of those cores is maybe 0.04 square millimeters of die on e.g. 28nm process. RAM takes some area, but that's dwarfed by the analog and power components and packaging. The marginal cost of the gates making up the processors is effectively zero.
So 1mm2 peppered with those cores at 300MHz would give you 4 Tflops, and a whole 200mm wafer 100 Petaflops, like 10 B200s, at less than $3K/wafer. Giving half the area to memory, we'd get 50 PFlops with 300Gb of RAM. Power draw is like 10-20KW. So, given these numbers, I'd guess Cerebras has tremendous margin and is just printing money :)
Yes, assuming you don't need to connect anything together and that RAM is tinier than it really is, sure. At 28nm, 3 megabits/square millimeter is what you get of SRAM, so an entire wafer only gets you ~12 gigabytes of memory.
And, of course, most of Cerebras' costs are NRE and the stuff like getting heat out of that wafer and power in.
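The wafer-capacity check, as a script (the effective SRAM density is the figure asserted above, which is itself an approximation):

    import math

    # How much SRAM fits on a 200mm wafer at an assumed effective density.
    wafer_area_mm2 = math.pi * (200 / 2) ** 2       # ~31,400 mm^2
    sram_mbit_per_mm2 = 3                           # assumed effective 28nm SRAM density

    total_gbit = wafer_area_mm2 * sram_mbit_per_mm2 / 1024
    print(total_gbit / 8)                           # ~11-12 GB, even using the whole wafer for SRAM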
Same reason why Cerebras doesn't use DRAM. The whole point of putting memory close is to increase performance and bandwidth, and DRAM is fundamentally higher-latency.
Also, process that is good at making logic isn't necessarily good for making DRAM. Yes, eDRAM exists, but most designs don't put DRAM on the same die as logic and instead stack it or put it off-chip.
Almost all of these single-die microcontrollers have flash + SRAM, and almost all microprocessor cache designs are SRAM (with some designs using off-die L3 DRAM)-- for these reasons.
> The whole point of putting memory close is to increase performance and bandwidth, and DRAM is fundamentally higher-latency.
When the access patterns are well established and understood, as in the case of transformers, you can mitigate latency with prefetching (you could even have a very beefed-up prefetch pipeline, knowing that you're targeting transformers), while putting memory on the same chip gives you a huge number of data lines, and thus huge bandwidth.
With embedded SRAM close, you get startling amounts of bandwidth -- Cerebras claims to attain >2 bytes/FLOP in practice -- vs H200 attaining more like 0.001-0.002 to the external DRAM. So we're talking about a 3 order of magnitude difference.
Would it be a little better with on-wafer distributed DRAM and sophisticated prefetch? Sure, but it wouldn't match SRAM, and you'd end up with a lot more interconnect and associated logic. And, of course, there's no clear path to run on a leading logic process and embed DRAM cells.
In turn, you have to batch for inference on an H200, whereas Cerebras can get full performance with very small batch sizes.
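Rough numbers behind that ratio; the H200 figures are ballpark public specs and the Cerebras number is the vendor's own claim:

    # Bytes-per-FLOP napkin math (ballpark figures).
    h200_hbm_bw = 4.8e12        # ~4.8 TB/s HBM3e bandwidth
    h200_fp16_flops = 2e15      # ~2 PFLOPS FP16 (with sparsity)
    h200_bytes_per_flop = h200_hbm_bw / h200_fp16_flops
    print(h200_bytes_per_flop)  # ~0.002 bytes/FLOP

    cerebras_bytes_per_flop = 2.0        # claimed, from on-wafer SRAM
    print(cerebras_bytes_per_flop / h200_bytes_per_flop)   # ~1000x, i.e. ~3 orders of magnitude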
- Any one of the 194_ games
- Legend of Zelda: A Link To The Past
- Super Mario World
- Final Fantasy VI, VII, IX
- Chrono Trigger (agree)
- Street Fighter 2 Championship Edition
- Metal Gear Solid 1-3, MGS: Peace Walker
But I think there's been good stuff since.
- The Super Mario Galaxy games
- Super Monkey Ball
- MGS4, MGS5
- Witcher 3
- The Bioshock games
- Minecraft-- probably the game with the most replay value of anything of all time.
I don't know what will stand the test of time. I don't want to play any of these games now, since I've burnt them out, but at some point I'll likely want to play them again...
- Undertale
- Bravely Default
- The Octopath games
- Dispatch
- AstroBot
- Clair Obscur