I don't know if it's intentional, but isn't this the ultimate form of liberalism? To give the individual full autonomy, unshackled from the dependencies of family, neighbors, community, and any other local associations individuals are "born into". Seems like we are exactly where we've been aiming for a couple hundred years.
Yeah, liberalism preaches selfishness under the false guise of pseudo-individualism. Everyone today is a special, unique individual, but ironically, at the same time, almost identical to the next special, unique individual, with their identity constructed by fervent consumption of the same mass-produced goods and images.
The final product of liberalism is Nietzsche's Last Man.
Yes, because he forgot about the car. The reason we don’t befriend our neighbors is that as soon as we leave our homes, we put ourselves in a metal cage, ensuring no one will talk to us if we don’t want them to.
Befriending your neighbors kind of works in a city but only REALLY works in Amish communities.
> they're not strictly necessary. We know this because humans can drive without a LIDAR
And propellers on a plane are not strictly necessary because birds can fly without them? The history of machines shows that while nature can sometimes inspire the _what_ of a machine, it is a very bad source of inspiration for the _how_.
What caused textile machines to replace manual labor wasn’t the quality of their output; it was quantity. In fact, manually made clothing was of higher quality than what was machine-produced.
Safety-critical code (it will kill someone if not bug-free) makes up <1% of what's shipped; safety clothing, which must be of high quality or risk harming someone, makes up a similarly small percentage.
Both will stay manual or require a high level of review; they're not what's being disrupted (at least in the near term) - it's the rest.
This is a distinction without a difference. Even if you take a rudimentary raw-cloth comparison like cotton vs heavy wool (the latter being fire resistant and historically used by firemen, i.e. “safety critical”), the machines’ output quality was significantly lower than manual output for the latter.
This phenomenon is a general one… chainsaws vs hand saws, bread slicers vs hand slicing, mechanical harvesters vs manual harvesting, etc.
That’s just not the general case at all. Automated or “powered” processes generally lead to a more consistent final product. In many cases the quality is just better than what can be done by hand.
We have plenty of people quite literally worth less than most material goods (evident from current social positions and continued trajectories), so why would companies care, as long as it makes more money overall? Our lives have a value, and in general it’s insultingly low.
The machines we’re talking about made raw cloth, not clothing, and it was actually higher quality in many respects because of accuracy and repeatability.
Almost all clothing is still made by hand, one piece at a time, with sewing machines that are still very manually operated.
“ …by the mid‑19th century machine‑woven cloth still could not equal the quality of hand‑woven Indian cloth. However, the high productivity of British textile manufacturing allowed coarser grades of British cloth to undersell hand‑spun and woven fabric in low‑wage India” [0]
“…the output of power looms was certainly greater than that of the handlooms, but the handloom weavers produced higher quality cloths with greater profit margins.” [1]
The same can be said about machines like the water frame. It was great at spinning coarse thread, but for high-quality/luxury textiles (i.e. fine fabric), skilled (human) spinners did a much better job. You can read the book Blood in the Machine for even more context.
The problem with those quotes is the lack of a definition of “quality”. Machine-woven cloth is in many ways better because of consistency and uniformity.
If your goal is to make 1000 of the exact same dress, having a completely consistent raw material is synonymous with high quality.
It’s not fair to say that machines produced some kind of inferior imitation of the handmade product that only won through sheer speed and cost to manufacture.
The environmental problem is enough for us to pump the brakes. By the end of this year, AI systems will be responsible for half of global data center power demand… 23 gigawatts. For what? A more useful search engine, a better autocomplete, and a shit code generator. Is it worth it? Are we even asking that question? When does it become not worth it? Who’s even running the calculus? The free market certainly isn’t.
I interpreted his post as saying it's not binary safe/unsafe, but rather a spectrum, with Java safer than C because of particular features that have pros and cons, not because of a magic free safe/unsafe switch. He's advocating for more nuance, not less.
Yeah, it's not binary; it's just a step function. /s
No, it's as close to binary as you can get. Is your only source of Undefined Behavior FFI and specially marked functions and/or packages? Have you checked data races for violating thread-safety invariants? If yes, you're safe.
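To make "specially marked" concrete in Go's case: the escape hatch is the `unsafe` package (and cgo), so every potential UB site of that kind is greppable. A minimal sketch, with the invariant noted next to the cast (`float64bits` is a toy name mirroring the stdlib's `math.Float64bits`):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // float64bits reinterprets the bits of f as a uint64.
    // SAFETY: float64 and uint64 have identical size and alignment,
    // so this is one of the conversion patterns the unsafe.Pointer
    // documentation explicitly allows; no runtime invariant is broken.
    func float64bits(f float64) uint64 {
        return *(*uint64)(unsafe.Pointer(&f))
    }

    func main() {
        fmt.Printf("%#x\n", float64bits(1.0)) // 0x3ff0000000000000
    }

Grep for `unsafe` and you have audited every such site; that's the near-binary property being described. The catch is the second question: data races aren't marked at all.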
Is Go mostly safer than C++? Maybe. But you can never prove that about either of them. So while you may pretend one is safer than the other, it's a bit like picking which boat is taking on more water.
Can you prove Rust code is safe? Well, there is the simple way: no unsafe. But what about unsafe blocks? Yes, you can prove it for them as well. A well-written unsafe block notes its safety invariants and why the block preserves them. Can this be practically done? Depends on the crate, but with enough effort, yes.
Can you show RCE using this? Because, to this day, no one has been able to show me a reasonable program that someone would write and that would result in RCE from "Go memory unsafety" presented in this article. Meanwhile, I can show you thousands of examples and CVEs of how you can easily get RCE using C++.
> Can you prove Rust code is safe? Well, there is the simple way: no unsafe. But what about unsafe blocks? Yes, you can prove it for them as well. A well-written unsafe block notes its safety invariants and why the block preserves them. Can this be practically done? Depends on the crate, but with enough effort, yes.
You can’t prove Rust code "safe" in the absolute. Safety guarantees apply to safe Rust under the language’s (still evolving) rules, and even then the compiler/backend must uphold them. We still hit unsoundness[1] and miscompiles in safe code (equal pointers comparing unequal... [2]), and the official unsafe code guidelines are not a finalized spec. So documenting invariants in unsafe helps a lot, but it’s not a formal proof, especially across crates and compiler versions.
Neither are memory safe, so if you're going by the "safe in practice" definition then it has to be verified experimentally. Hence - maybe.
> Can you show RCE using this?
RCE and Undefined Behavior are two intersecting sets. Not all UB is RCE, but all UBs are hard-to-track bugs that happen at the most inconvenient times.
> You can’t prove Rust code "safe" in the absolute.
Sure, in general you can't prove whether an arbitrary Turing machine has some property X. But RustBelt (PDF: https://hal.science/hal-01633165v2/document) is proof that the safety of safe code is extensible and can apply well to safe interfaces encapsulating unsafe.
> We still hit unsoundness[1] and miscompiles in safe code (equal pointers comparing unequal... [2])
Your [1] is an LLVM bug.
As for [2], yeah, there ARE bugs and wrong flags, but they are fixing them, and triggering most of them requires stuff like nightly, hitting bugs in specific hardware/LLVM, or very contrived trait constructions.
I mean, sure, by that token nothing is ever safe, reality is crooked, coins have three sides, and white is black, so traffic crossings are mass hallucinations.
> On the safety spectrum: C/C++ -> Zig -> Go -> Rust
Honestly it goes like this.
C -> C++ --> Zig ------> Go --------------------------------------------------------------------------> Rust --> Ada Core
> Neither are memory safe, so if you're going by the "safe in practice" definition then it has to be verified experimentally. Hence - maybe.
This is a ridiculous claim that it’s only "maybe". It’s so obvious, it’s like saying cars are not safe to drive, but if you use seatbelts and have airbags, then MAYBE they’re safer. I have verified this experimentally, like millions of other people. This argument is totally in bad faith, given the sea of CVEs caused by memory safety issues in C++ versus the virtually non-existent problem in safe Go in practice.
> Your [1] is an LLVM bug.
Yes, unfixed for two years. I don’t have this bug in Go, for example, so why, as a Rust user, should I care whose fault it is? If you buy a car and the engine doesn’t work as it should in some cases, do you accept the manufacturer saying, "Well, that’s the engine manufacturer’s issue, so all is OK"?
> As for [2], yeah, there ARE bugs and wrong flags, but they are fixing them, and triggering most of them requires stuff like nightly, hitting bugs in specific hardware/LLVM, or very contrived trait constructions.
That went fast from "proven to be safe" to "yeah there are bugs".
> Honestly it goes like this. C -> C++ --> Zig ------> Go --------------------------------------------------------------------------> Rust --> Ada Core
And how did you arrive at these numbers of "—"? Did you "verify them experimentally"? Because I claim otherwise:
C --> C++ ----> Zig --------------> Go ----> Rust --> Ada Core
Can you prove me wrong, or prove you are right? You can't.
It seems we can only agree on the ranking of the languages.
> This is a ridiculous claim that it’s only "maybe". It’s so obvious.
Given enough effort, you can banish all UB and its related CVEs from a codebase. So it becomes a contest of which library had more scrutiny. I.e. you can compare a battle-tested library like cURL to stuff like baby's first XML parser. Plus, being safe in practice is much different from just being memory safe.
> That went fast from "proven to be safe" to "yeah there are bugs".
"Modulo compiler/hardware bugs" goes without saying. Nothing can really exist in a vacuum. You could prove your program works correctly, but if you put it on a platform where a carry bit can randomly flip, your perfect proof falls flat.
> Yes, unfixed for two years. I don’t have this bug in Go, for example, so why, as a Rust user, should I care whose fault it is?
Because it will eventually be fixed, unlike Go's design (that said, they could change their tune and fix it, and I'd respectfully correct my statements). Then again, it's not like it is in scope for Rust; it's a bug in LLVM.
Plus, `unsound` bugs get extra scrutiny. Hell, they had to write a new borrow checker, Polonius, to solve problems some borrow patterns presented to the safety system.
> And how did you arrive at these numbers of "—"?
How many UBs do you leave open? How many other errors can your language prevent (e.g. do you allow `null`/`nil`)? And was the error extremely obvious at the time of writing (the billion-dollar mistake)?
> Given enough effort, you can banish all UB and its related CVEs from a codebase.
Sure. With infinite energy, anything’s possible - we can prevent all bugs. The problem is, we don’t have infinite energy.
> So it becomes a contest of which library had more scrutiny. I.e. you can compare a battle-tested library like cURL to stuff like baby's first XML parser.
I agree that software varies in quality, and that different people and teams can produce very different levels of quality. The issue is that we’re talking about languages used by many different people with varying capabilities and levels of scrutiny.
What really annoys me is that my phone can get hit with an RCE just from someone sending me a message. That’s exactly the kind of vulnerability that happens because languages like C or C++ are so easy to misuse, due to their complexity and lack of safety. You just can’t compare that to Go or Rust; they’re in a completely different galaxy.
> How many UBs do you leave open? How many other errors can your language prevent (e.g. do you allow `null`/`nil`)? And was the error extremely obvious at the time of writing (the billion-dollar mistake)?
Everything is a trade-off. I find Go to be a middle ground where I can offload much of the memory management complexity to the garbage collector, yet still have control over the aspects I care about, with acceptable performance for multi-threaded networking code.
I make extensive use of atomics and mutexes and I don’t need "fearless concurrency," because I can only recall one serious concurrency bug I’ve ever had (a data race) which took some debugging time to track down, but it wasn’t in Go. YMMV.
As for the “billion dollar mistake”, I understand the argument in the context of C or C++, but not in the context of Go. Once every few months, I get a nil pointer dereference notification from the monitoring stack. The middle network layer will report the error to the user; I fix it and move on. A $0 mistake.
I’ve used Rust in many other projects where I would never use C++ or C. Rust has a higher cognitive load and more language complexity, and refactoring is painful, but it’s a trade-off.
Under Go’s memory model, there’s really only one form of undefined behavior: a program with a data race has no defined semantics. That’s pretty much it. Compare that to C or C++; like I said, it’s a different galaxy. I find the whole discussion around Go’s safety exaggerated, and more theoretical than what actually comes up in practice.
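For concreteness, here’s roughly what that one form looks like (a contrived toy, not taken from the article: a slice header is three words, so an unsynchronized write can be observed half-complete):

    package main

    // The racy write below can be observed torn, pairing short's
    // pointer with long's length, so the reading loop may index far
    // past short's single byte. Per the Go memory model, this
    // program has no defined semantics.
    func main() {
        short := make([]byte, 1)
        long := make([]byte, 1<<20)
        var s []byte
        go func() {
            for {
                s = short
                s = long
            }
        }()
        for {
            if t := s; len(t) > 0 { // unsynchronized read of the header
                t[len(t)-1] = 0 // may write out of bounds
            }
        }
    }

Running it under `go run -race` flags the access immediately, which is part of why this rarely bites in practice, though the detector only reports races an execution actually hits.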
> Sure. With infinite energy, anything’s possible - we can prevent all bugs. The problem is, we don’t have infinite energy.
You don't need infinite energy, but it is a significant undertaking. UB seems to obey a power law with respect to lifetime, i.e. every X years the number of UB-caused problems halves.
> I can only recall one serious concurrency bug I’ve ever had (a data race) which took some debugging time to track down, but it wasn’t in Go. YMMV.
That's what's nasty about UB from data races: it's not trivial to find, and it's a pain in the neck to reproduce. So even taking you at your word, you not having issues with data races isn't the same as "I wrote code free of data races".
> As for the “billion dollar mistake”, I understand the argument in the context of C or C++, but not in the context of Go.
Even without a chance of UB, you're adding an implicit `Type | null` to every piece of code that uses nullable values without handling the null case. Each time you forget to handle it, you cause either UB or a null pointer error. And the place it manifests is different from where it's generated.
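A minimal sketch of that action at a distance (`findUser`/`notify` are hypothetical names):

    package main

    import "fmt"

    type User struct{ Email string }

    // The return type is implicitly `*User | nil`, but nothing
    // forces callers to handle the nil case.
    func findUser(id int) *User {
        if id == 42 {
            return &User{Email: "a@example.com"}
        }
        return nil // the nil is generated here...
    }

    func notify(u *User) {
        fmt.Println("mail to", u.Email) // ...and manifests here
    }

    func main() {
        // panics: invalid memory address or nil pointer dereference
        notify(findUser(7))
    }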
Furthermore, listening to Tony Hoare's talk (https://youtu.be/ybrQvs4x0Ps?t=1682), he mentions that trying to avoid null in a language like Java or C#, which permits it, also causes people to waste time working around it.
> The middle network layer will report the error to the user; I fix it and move on. A $0 mistake.
Sure, but that's not a $0 mistake. It's {time to fix * hourly rate}. Even if you're doing it for OSS, you could have been spending that time doing something else.
> Everything is a trade-off.
Sure. But trading away "preventing errors" for "ease of use" is not something I want in any language I use. Null/nil are about as good concepts today as silently skipping errors.
> Even without a chance of UB, you're adding an implicit `Type | null` to every piece of code that uses nullable values without handling the null case. Each time you forget to handle it, you cause either UB or a null pointer error. And the place it manifests is different from where it's generated.
> Furthermore, listening to Tony Hoare's talk (https://youtu.be/ybrQvs4x0Ps?t=1682), he mentions that trying to avoid null in a language like Java or C#, which permits it, also causes people to waste time working around it.
It's an insignificant amount of time in my experience, which makes this irrelevant in practice. I’ve never had to "work around them"; this is either a theoretical argument or just sloppy programming.
> Sure, but that's not a $0 mistake. It's {time to fix * hourly rate}. Even if you're doing it for OSS, you could have been spending that time doing something else.
Oh, come on, you’re smart enough to understand that a few minutes every few months may not be exactly $0, but it’s such an insignificant value that it can be treated as $0.
> Sure. But trading away "preventing errors" for "ease of use" is not something I want in any language I use.
Well, you can choose your trade-offs, and you can let others choose theirs. So I guess you use Ada for everything? Or maybe you use Coq to prove everything? Of course you don’t; you’re also making trade-offs, just different ones.
> Null/nil are about as good concepts today as silently skipping errors.
You exaggerate again. I do not agree with that framing, and I back my opinion with experience from practice.
> Oh, come on, you’re smart enough to understand that a few minutes every few months may not be exactly $0, but it’s such an insignificant value that it can be treated as $0.
You're smart enough to know that when you sum small things, aggregated over many people and a long time (circa 50 years), you get to massive numbers.
> Well, you can choose your trade-offs, and you can let others choose theirs. So I guess you use Ada for everything? Or maybe you use Coq to prove everything? Of course you don’t; you’re also making trade-offs, just different ones.
Ada doesn't give you full memory safety; I think you need SPARK. I can't find as much teaching material on it, but it's definitely on my radar. Also, I'm more of a Lean guy myself, but it has a different purpose than Rust, i.e. proving things.