
> Neither are memory safe, so if you're going by the "safe in practice" definition then it has to be verified experimentally. Hence - maybe.

It’s ridiculous to claim it’s only "maybe". It’s so obvious, it’s like saying cars are not safe to drive, but if you use seatbelts and have airbags, then MAYBE they’re safer. I have verified this experimentally, like millions of other people. This argument is totally in bad faith, given the sea of CVEs caused by memory safety issues in C++ versus the virtually non-existent problem in safe Go in practice.

> Your [1] is an LLVM bug.

Yes, unfixed for two years. I don’t have this bug in Go, for example, so why, as a Rust user, should I care whose fault it is? If you buy a car and the engine doesn’t work as it should in some cases, do you accept the manufacturer saying, "Well, that’s the engine manufacturer’s issue, so all is OK"?

> As for [2] yeah there ARE bugs, wrong flags, but they are fixing it, and triggering most requires stuff like nightly, hitting bugs in specific hardware/LLVM, or very contrived trait constructions.

That went fast from "proven to be safe" to "yeah there are bugs".

> Honestly it goes like this. C -> C++ --> Zig ------> Go --------------------------------------------------------------------------> Rust --> Ada Core

And how did you arrive at these numbers of "—"? Did you "verify them experimentally"? Because I claim otherwise:

C --> C++ ----> Zig --------------> Go ----> Rust --> Ada Core

Can you prove me wrong, or prove you are right? You can't. It seems we can only agree on the ranking of the languages.



> It’s ridiculous to claim it’s only "maybe". It’s so obvious.

Given enough effort, you can banish all UB and their related CVEs from a codebase. So it becomes a contest of which library had more scrutiny. I.e. you can compare a battle-tested library like cURL to stuff like baby's first XML parser. Plus, being safe in practice is very different from just being memory safe.

> That went fast from "proven to be safe" to "yeah there are bugs".

Modulo compiler/hardware bugs goes without saying. Nothing can really exist in a vacuum. You could prove your program works correctly, but put it on a platform where a carry bit can randomly flip, and your perfect proof falls flat.

> Yes, unfixed for two years. I don’t have this bug in Go, for example, so why, as a Rust user, should I care whose fault it is?

Because it will eventually be fixed, unlike Go's design (that said, they could change their tune and fix it, and I'll respectfully correct my statements). Then again, it's not like it is in scope for Rust; it's a bug in LLVM.

Plus, `unsound` bugs get extra scrutiny. Hell, they had to write a new borrow checker, Polonius, to solve problems the old borrow checking rules presented to the safety system.

> And how did you arrive at these numbers of "—"?

How many UBs do you leave open? How many other errors can your language prevent (e.g. do you allow `null`/`nil`)? And was the error extremely obvious at the time of writing (the billion dollar mistake)?


> Given enough effort, you can banish all UB and their related CVEs from a codebase.

Sure. With infinite energy, anything’s possible - we can prevent all bugs. The problem is, we don’t have infinite energy.

> So it becomes a contest of which library had more scrutiny. I.e. you can compare a battle-tested library like cURL to stuff like baby's first XML parser.

I agree that software varies in quality, and that different people and teams can produce very different levels of quality. The issue is that we’re talking about languages used by many different people with varying capabilities and levels of scrutiny.

What really annoys me is that my phone can get hit with an RCE just from someone sending me a message. That’s exactly the kind of vulnerability that happens because languages like C or C++ are so easy to misuse, due to their complexity and lack of safety. You just can’t compare that to Go or Rust; they’re in a completely different galaxy.

> How many UBs do you leave open? How many other errors can your language prevent (e.g. do you allow `null`/`nil`)? And was the error extremely obvious at the time of writing (the billion dollar mistake)?

Everything is a trade-off. I find Go to be a middle ground where I can offload much of the memory management complexity to the garbage collector, yet still have control over the aspects I care about, with acceptable performance for multi-threaded networking code.

I make extensive use of atomics and mutexes and I don’t need "fearless concurrency," because I can only recall one serious concurrency bug I’ve ever had (a data race) which took some debugging time to track down, but it wasn’t in Go. YMMV.
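
For illustration, a minimal sketch of the kind of pattern I mean (names and structure are hypothetical): an atomic for the hot counter, a mutex for the structured state.

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    // stats is shared across goroutines: an atomic for a hot counter,
    // a mutex for the map, which an atomic can't protect.
    type stats struct {
        requests atomic.Int64
        mu       sync.Mutex
        byPath   map[string]int
    }

    func (s *stats) record(path string) {
        s.requests.Add(1) // lock-free increment
        s.mu.Lock()
        s.byPath[path]++ // map writes need the mutex
        s.mu.Unlock()
    }

    func main() {
        s := &stats{byPath: make(map[string]int)}
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                s.record("/api")
            }()
        }
        wg.Wait()
        fmt.Println(s.requests.Load(), s.byPath["/api"]) // 100 100
    }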

As for the “billion dollar mistake”, I understand the argument in the context of C or C++, but not in the context of Go. Once every few months, I get a nil pointer dereference notification from the monitoring stack. The middle network layer reports the error to the user; I fix it and move on. A $0 mistake.
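
For concreteness, a minimal sketch of that kind of middle layer (hypothetical names, not my actual stack): the nil dereference panics, the panic is caught, logged for monitoring, and reported to the user.

    package main

    import (
        "log"
        "net/http"
    )

    // recoverPanics turns a handler panic (e.g. a nil dereference)
    // into a logged error and a 500 for the user.
    func recoverPanics(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            defer func() {
                if err := recover(); err != nil {
                    log.Printf("panic serving %s: %v", r.URL.Path, err) // monitoring picks this up
                    http.Error(w, "internal error", http.StatusInternalServerError)
                }
            }()
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        var cfg *struct{ Greeting string } // nil: the bug to be fixed later
        hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte(cfg.Greeting)) // nil dereference -> panic -> 500
        })
        log.Fatal(http.ListenAndServe(":8080", recoverPanics(hello)))
    }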

I’ve used Rust in many other projects, where I would never use C or C++. Rust has a higher cognitive load and more language complexity, and refactoring is painful, but it’s a trade-off.

Under Go’s memory model, there’s really only one form of undefined behavior: a program with a data race has no defined semantics. That’s pretty much it. Compare that to C or C++: as I said, a different galaxy. I find the whole discussion around Go’s safety exaggerated, and more theoretical than what actually comes up in practice.
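
To make that concrete, here is a minimal sketch of the one form Go's UB takes; `go run -race` flags it at runtime:

    package main

    import "fmt"

    func main() {
        counter := 0
        done := make(chan struct{})
        go func() {
            for i := 0; i < 1000; i++ {
                counter++ // unsynchronized write from a second goroutine
            }
            close(done)
        }()
        for i := 0; i < 1000; i++ {
            counter++ // racing write: the program has no defined semantics
        }
        <-done
        fmt.Println(counter) // unpredictable result
    }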


> Sure. With infinite energy, anything’s possible - we can prevent all bugs. The problem is, we don’t have infinite energy.

You don't need infinite energy, but it is a significant undertaking. UB bugs seem to obey a power law with respect to their lifetime, i.e. every X years the number of UB-caused problems halves.

> I can only recall one serious concurrency bug I’ve ever had (a data race) which took some debugging time to track down, but it wasn’t in Go. YMMV.

That's what is nasty about UB from data races: it's not trivial to find and it's a pain in the neck to reproduce. So even taking you at your word, you not having issues with data races isn't the same as "I wrote code free of data races".

> As for the “billion dollar mistake”, I understand the argument in the context of C or C++, but not in the context of Go.

Even without a chance of UB, you're adding an implicit `Type | null` to every piece of code that uses a nullable type without handling the null case. Each time you forget, you get either UB or a null pointer error. And the place it manifests is different from where it's generated.
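
A minimal Go sketch of that failure mode (names are hypothetical): the nil is generated in one place and the crash manifests somewhere else entirely.

    package main

    import "fmt"

    type User struct{ Email string }

    // lookup's signature says *User, but it really means the implicit
    // `*User | nil` described above.
    func lookup(id int) *User {
        if id == 42 {
            return &User{Email: "admin@example.com"}
        }
        return nil // the nil is generated here...
    }

    func notify(u *User) {
        fmt.Println("mailing", u.Email) // ...and dereferenced here: panic
    }

    func main() {
        notify(lookup(7)) // nil pointer dereference at runtime
    }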

Furthermore, in his talk (https://youtu.be/ybrQvs4x0Ps?t=1682), Tony Hoare mentions that trying to avoid null in a language like Java or C# that permits it also causes people to waste time working around it.

> The middle network layer reports the error to the user; I fix it and move on. A $0 mistake.

Sure, but that's not a $0 mistake. It's {time to fix * hourly rate}. Even if you're doing it for OSS, you could have been spending that time doing something else.

> Everything is a trade-off.

Sure. But trading "preventing errors" for "ease of use" is not something I want in any language I use. Null/nil are about as good concepts today as silently skipping errors.
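
To spell out the comparison, a small illustrative sketch of the two habits side by side:

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        // Silently skipping an error: n is 0 and nothing says why.
        n, _ := strconv.Atoi("not a number")
        fmt.Println(n) // 0

        // Accepting nil: the same omission, deferred to runtime.
        var p *int
        fmt.Println(*p) // panic: nil pointer dereference
    }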


> Even without a chance of UB, you're adding an implicit `Type | null` to every piece of code that uses a nullable type without handling the null case. Each time you forget, you get either UB or a null pointer error. And the place it manifests is different from where it's generated.

> Furthermore, in his talk (https://youtu.be/ybrQvs4x0Ps?t=1682), Tony Hoare mentions that trying to avoid null in a language like Java or C# that permits it also causes people to waste time working around it.

It's an insignificant amount of time in my experience, which makes this irrelevant in practice. I’ve never had to "work around them"; this is either a theoretical argument or just sloppy programming.

> Sure, but that's not a $0 mistake. It's {time to fix * hourly rate}. Even if you're doing it for OSS, you could have been spending that time doing something else.

Oh, come on, you’re smart enough to understand that a few minutes every few months may not be exactly $0, but it’s such an insignificant value that it can be treated as $0.

> Sure. But trading "preventing errors" for "ease of use" is not something I want in any language I use.

Well, you can choose your trade-offs, and you can let others choose theirs. So I guess you use Ada for everything? Or maybe you use Coq to prove everything? Of course you don’t; you’re also making trade-offs, just different ones.

> Null/nil are about as good concepts today as silently skipping errors.

You exaggerate again. I do not agree with that framing, and I back my opinion with practical experience.


> Oh, come on, you’re smart enough to understand that a few minutes every few months may not be exactly $0, but it’s such an insignificant value that it can be treated as $0.

You're smart enough to know that small things, summed over many people and a long time (circa 50 years), aggregate into massive numbers.

> Well, you can choose your trade-offs, and you can let others choose theirs. So I guess you use Ada for everything? Or maybe you use Coq to prove everything? Of course you don’t; you’re also making trade-offs, just different ones.

Ada doesn't give you full memory safety; I think you need SPARK. I can't find as much teaching material on it, but it's definitely on my radar. Also, I'm more of a Lean guy myself, but Lean has a different purpose than Rust, i.e. proving things.

And proofs aren't everything.



