I think one of the problems of modern society is the level of risk people deem acceptable - it's now near zero, instead of "reasonable risks".
Aside from that, the value we culturally place on a single human life has also changed - death used to be much more common, infant death in particular. It's not uncommon to go to an old cemetery and see that a single family buried three or more children, with another 2-3 having survived to adulthood. Nor was this limited to the lower classes: Calvin Coolidge had a son who died of sepsis from a blister while he was president.
>I think one of the problems of modern society is the level of risk people deem acceptable - it's now near zero, instead of "reasonable risks".
I've watched plenty of youtube videos that say something like 'But management needed dem profits so they took the risk'
So... let us not pretend we don't cut corners and take risks. There are plenty of modern deaths and plenty of environmental destruction because people take risks.
What I think should be more acceptable is people taking personal risks. Nothing wrong with accepting the risk of being the first person in an unregulated prototype spaceship or taking unverified medicine.
I watch those USCSB videos too, and the takeaway I have is that even with these sorts of fuckups there's a single-digit or low-double-digit number of people killed in industrial accidents each year in a country of 350 million. That suggests that we are actually pretty good at chemical safety already.
Yeah my point was more we probably don't really need more stringent industrial safety standards if we have gotten deaths due to industrial accidents down so low. The cost/benefit tradeoff isn't there anymore.
CSB videos (deliberately) aren't trying to show the full human cost of incidents. The US is full of communities that have been poisoned for decades from these sorts of things, and workers have a lot more to worry about than dying.
The regulatory and legal strangleholds we have put on modern society allow large organizations to roll the dice with abstract and diffuse risks - often without owning the liability from those choices - but often preclude individuals from making their own personal risk assessments and deciding whether to take part, because the liability rolls someplace else (aka, you can always sue).
Are you implying that was, somehow, good? Because it was bad. Most major religions / ethical paradigms agree on this.
People, individually, should take risks if that's what they want, and it's not going to hurt others. I'm totally fine with skydiving, base jumping, rock climbing, whatever. I'm not fine with pumping chemicals into the local water table because that's the way Grandpa do.
These types of arguments are always so easy when you present everything as insanely black and white.
A thought experiment: if we could install a device which increases the likelihood of everyone surviving a car crash by 0.001%, but it costs $100,000 should it be mandated in every car? After all, this involves a victim as well.
I don’t think anyone would agree to that particular law. There is inevitably going to be a cutoff where you say “the increased safety is no longer worth the cost”. That’s acceptable risk and it’s not only good, it’s absolutely necessary to a functioning society.
What can you install? I’m talking about a fake device. It can’t be installed because it doesn’t exist. It also obviously reduces harm by 0.001%, because again, this is a thought experiment meant to illustrate a point.
I literally have no idea what device you think I’m talking about
5-point harnesses, roll cages, seatbelt-ignition interlocks, fuel cells, fire extinguishers, and HANS devices for drivers and passengers would eliminate nearly all vehicle fatalities.
They exist today and can be added to passenger vehicles for much less than $100k.
Broadly, the only fatalities they wouldn't prevent are offset frontal crashes at speeds so great as to be unreasonable and vehicles that have driven off cliffs or into bodies of water (though many would be able to self-extricate).
These are all dumb devices, I'm sure you were thinking of an AI or self driving doohickey.
We've already gotten results from your thought experiment. The results are: "an arbitrary point where cost/convenience lines cross over, based on general consensus but mainly liability costs".
If $10k (what I spent to get my Miata track-ready) isn't the line, $100k ain't it either-- especially since my solution saves tens of thousands annually and yours, like, less than one (45k deaths * 0.001% = 0.45 lives saved annually).
> if we could install a device which increases the likelihood of everyone surviving a car crash by 0.001%, but it costs $100,000 should it be mandated in every car
I mean we can do that right now and we don’t.
However we do mandate a rear backup camera, which costs $1-2k and saves some percentage of lives when backing up.
Well yeah, that’s my point - acceptable risk is, in fact, good. If it weren’t, then there would be no resistance to that $100,000 component. But of course we still want to reduce danger, so adding an extra 2% to the cost of a car is fairly reasonable.
>However we do mandate a rear backup camera, which costs $1-2k and saves some percentage of lives when backing up.
A rear backup camera module, its wiring, and the screen cost less than $15 at volume for minimally-viable options and $100-300 for "good" (HD, guidance lines, proximity sensors, etc.) options.
Not $1,000-2,000.
What manufacturers charge for them is a different matter.
Rear backup cameras are one of the cases where I think the math falls apart - it's something like 250 million dollars to save approx 30 lives a year. Where it does work is in reducing body damage to vehicles, however I don't know that's enough to mandate them.
The biggest issue I have is that we allow large organizations to make decisions on diffuse/abstract risks - often without owning the liability from those choices - but roll many liabilities for an individual choice up to an organization. It's perverse, and should be the other way around.
If I do something that earns me a darwin award at work, my company probably should not be liable for it.
Not to mention the discussions around risk are too coupled to political positions / zealotry now, so they can no longer be civilly discussed. If you ever take the position of wanting to accept what you believe to be reasonable risk, it's standard practice for the opposition to slander you as an evil person who wants to kill people/babies/homeless/whoever. For example: the other person in this very comment thread who interpreted your point that we no longer accept reasonable levels of risk as people like you wanting to poison the water table.
An illustrative example might also be US cities like Seattle that have their "vision zero" programs: a "vision", and related actions, to get to zero traffic-related deaths or serious injuries every year. It's the official position of these city governments that literally any non-zero death rate is unacceptable, no matter how low the death rate is (and it's already very low). They officially accept zero risk. Is that reasonable? Is it even about the death rate, or something else? Either way, it's fully undebatable.
I think you're kind of looking at this the wrong way.
As I understand it, 'vision zero' type programs are often an aspirational goal, and they implicitly recognize the near impossibility of reaching that goal, but that near impossibility doesn't make the goal undesirable or not worth striving for.
If you set such a goal you start to analyze the systems involved which cause accidents and deaths in a different way, and you seek to change them on a fundamental level to significantly reduce, if not outright eliminate, the possibility of certain categories of accidents entirely.
So instead of doing moderately effective but ultimately fleeting stuff like cracking down on speeders and drunk drivers through police action, you redesign the infrastructure: traffic calming[0] measures make it much harder for a car to physically strike pedestrians, physically separate pedestrian and bike infrastructure keeps cars from coming into contact with people at all, and mass transit simply decreases the number of drivers on the road while again physically separating people from automobiles, eliminating the possibility that they're involved in car accidents.
In doing so you not only reduce the number of accidents that injure and maim people, but you induce them to be more physically active and therefore healthier, so that they're better able to recover from car accidents, slips and falls, and illnesses - which ends up paying for the infrastructure improvements over the long run.
This is so much better than the alternative, where speeders consistently speed and kill pedestrians in an unmarked crosswalk, so we decide to play whack-a-mole by increasing the police budget for photo radar for a while until the public forgets that this particular street is dangerous to cross on foot.
Very much agreed - and actually the fact that people can poison the water table is the exact problem we have: abstract and diffuse risks (tragedy-of-the-commons type stuff) are treated one way, but risk to an individual is treated another. We worry very much about individual lead or asbestos exposure, yet there is no systematic plan to clean it up. An example: we've spent lots of effort on trying to eliminate leaded AvGas (which primarily affects users of it), but not as much on environmental lead from batteries.
Part of this is driven by social media discourse and polarization - we essentially had a whole bunch of ideas that were outside the window of normal discourse that have now been adopted as ideology and dogma by their respective camps. Once an idea is dogma / a key ideological tenet, it's really hard to challenge it.
Vision Zero should be viewed for what it is: a dumb idea.
The general cost across several large codebases has been observed to be minimal. Yet there are specific scenarios where the costs are real and observable. For those rare cases, an explicit opt-in to risky behavior makes sense.
> The general cost across several large codebases has been observed to be minimal
Is this unexpected? A large code base has a lot of other things going on, and it is normal that such changes will be a rounding error. There are lots of other bottlenecks that will just overwhelm such a change. I don't think "it doesn't affect large code bases much" is a good argument - you can use that argument for pretty much anything that adds an overhead.
Not to mention that if you changed every int a to int a = 0 right now in those code bases, the a = 0 part would likely be optimized away, since that value is not being (and shouldn't be) used at all and will likely be overwritten in all code paths.
Yeah, exactly. If you're a developer, it can take a little bit to figure out that "docs like code" is a really strange concept to lots of non-developers.
The idea of using the same tools to manage your docs as you manage your code only makes sense if you understand what tools you use to manage code! If you don't -- if your main experience with documentation tooling is Word, or maybe MadCap stuff -- that's a really huge leap to make.
Unfortunately, operator[] on std::vector is inherently unsafe. You can potentially try to ban it (using at() instead), but that has its own problems.
There’s a great talk by Louis Brandy called “Curiously Recurring C++ Bugs at Facebook” [0] that covers this really well, along with std::map’s operator[] and some more tricky bugs. An interesting question to ask if you watch that talk is: how does Rust design around those bugs, and what trade-offs does it make?
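To make the contrast concrete, here's a minimal sketch (not from the talk, just my own illustration) of the Rust side of that question: Vec/slice indexing is always bounds-checked and panics on an out-of-range index rather than reading past the end, and the checked accessor get() hands you an Option you have to deal with.

    fn main() {
        let v = vec![10, 20, 30];

        // Checked access: get() returns Option<&i32>, so the caller is forced
        // to decide what happens when the index is out of range.
        match v.get(5) {
            Some(n) => println!("got {n}"),
            None => println!("index 5 is out of bounds"),
        }

        // Plain indexing is still bounds-checked: instead of undefined behavior
        // like C++'s operator[], this panics at runtime with an out-of-bounds error.
        let n = v[5];
        println!("{n}");
    }

The trade-off is the cost of the bounds check on every access (with an explicit opt-out, get_unchecked, available only in unsafe code).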
Thank you for sharing. Seems I still have more to learn!
It seems the bug you are flagging here is a null reference bug - I know Rust has Optional as a workaround for “null”
Are there any pitfalls in Rust when Optional does not return anything? Or does Optional close this bug altogether? I saw Optional pop up in Java to quiet down complaints on null pointer bugs but remained skeptical whether or not it was better to design around the fact that there could be the absence of “something” coming into existence when it should have been initialized
It's not so much Optional that deals with the bug; it's the fact that you can't just use a value that could possibly be null in a way that would break at runtime if it is null - the type system won't let you, forcing an explicit check. Different languages do this in different ways - e.g. in C# and TypeScript you still have null, but references are designated as nullable or non-nullable, and an explicit comparison to null changes the type of the corresponding variable to indicate that it's not null.
I think sum types in general, and Option<T> in particular, are nicer. But the reason C# has nullability isn't that they disagree with me; it's that fundamentally the CLR has the same model as Java: all these types can be null. Even though in the modern C# language you can say "no, null is never OK here", at runtime on the CLR, too bad, maybe it's null anyway.
For example if I write a C# function which takes a Goose, specifically a Goose, not a Goose? or similar - well, too bad the CLR says my C# function can be called by this obsolete BASIC code which has no idea what a Goose is, but it's OK because it passed null. If my code can't cope with a null? Too bad, runtime exception.
In real C# apps written by an in-house team this isn't an issue. Ollie may not be the world's best programmer, but he's not going to figure out how to explicitly call this API with a null; he's going to be stopped by the C# compiler diagnostic saying it needs a Goose, and worst case he says "Hey tialaramex, why do I need a Goose?". But if you make stuff that's used by people you've never met, it can be an issue.
> For example if I write a C# function which takes a Goose, specifically a Goose, not a Goose? or similar - well, too bad the CLR says my C# function can be called by this obsolete BASIC code which has no idea what a Goose is, but it's OK because it passed null. If my code can't cope with a null? Too bad, runtime exception.
That's actually no different in Rust: if you try, you can pass a 0 value to a function that only accepts a reference (i.e. a non-null pointer), be it via unsafe, or assembly, or whatever.
Disagreeing with another comment on this thread, this isn't a matter of judgement around "whose bug is it? Should the callee check for null, or the caller?". Rust's win is in clearly articulating that the API takes non-null, so the caller is buggy.
As you mention it can still be an issue, but there should be no uncertainty around whose mistake it is.
The difference is that C# has well-defined behavior in this case - a non-nullable annotation is really "non-nullable-ish", and there are cases even in the language itself where code without any casts in it will observe null values of such types. It's just a type system hole they allow for convenience and back-compat.
OTOH with Rust you'd have to violate its safety guarantees, which if I understand correctly triggers UB.
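As a tiny sketch of what that looks like in practice (my own example, not anyone's real API): safe Rust has no expression that produces a null reference at all, and the only way to forge one is unsafe code that is undefined behavior the moment the reference is created.

    fn takes_ref(x: &i32) -> i32 {
        // The type guarantees x is non-null, so no check is needed here.
        *x
    }

    fn main() {
        let value = 42;
        println!("{}", takes_ref(&value)); // fine

        // There is no safe way to call takes_ref with "null":
        // takes_ref(std::ptr::null());  // does not compile: a raw pointer is not a reference
        //
        // The closest you can get is forging a reference in unsafe code,
        // which is immediate undefined behavior:
        // let forged: &i32 = unsafe { &*std::ptr::null::<i32>() }; // UB - never do this
    }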
Rust’s Option does close this altogether, yes. All (non-unsafe) users of Option are required to have some defined behavior in both cases. This is enforced by the language in the match statement, and most of the “member functions” on Option use match under the hood.
This is an issue with the C++ standardization process as much as with the language itself. AIUI when std::optional (and std::variant, which has similar issues) were defined, there was a push to get new syntax into the language itself that would’ve been similar to Rust’s match statement.
However, that never made it through the standardization process, so we ended up with “library variants” that are not safe in all circumstances.
> whether or not it was better to design around the fact that there could be the absence of “something” coming into existence when it should have been initialized
So this is actually why "no null, but optional types" is such a nice spot in the programming language design space. Because by default, you are making sure it "should have been initialized," that is, in Rust:
struct Point {
x: i32,
y: i32,
}
You know that x and y can never be null. You can't construct a Point without those numbers existing.
By contrast, here's a point where they could be:
struct Point {
x: Option<i32>,
y: Option<i32>,
}
You know by looking at the type if it's ever possible for something to be missing or not.
> Are there any pitfalls in Rust when Optional does not return anything?
So, Rust will require you to handle both cases. For example:
let x: Option<i32> = Some(5); // adding the type for clarity
dbg!(x + 7); // try to debug print the result
This will give you a compile-time error:
error[E0369]: cannot add `{integer}` to `Option<i32>`
--> src/main.rs:4:12
|
4 | dbg!(x + 7); // try to debug print the result
| - ^ - {integer}
| |
| Option<i32>
|
note: the foreign item type `Option<i32>` doesn't implement `Add<{integer}>`
It's not so much "pitfalls" exactly, but you can choose to do the same thing you'd get in a language with null: you can choose not to handle that case:
let x: Option<i32> = Some(5); // adding the type for clarity
let result = match x {
Some(num) => num + 7,
None => panic!("we don't have a number"),
};
dbg!(result); // try to debug print the result
This will successfully print, but if we change `x` to `None`, we'll get a panic, and our current thread dies.
Because this pattern is useful, there's a method on Option called `unwrap()` that does this:
let result = x.unwrap();
And so, you can argue that Rust doesn't truly force you to do something different here. It forces you to make an active choice, to handle it or not to handle it, and in what way. Another option, for example, is to return a default value. Here it is written out, and then with the convenience method:
let result = match x {
Some(num) => num + 7,
None => 0,
};
let result = x.unwrap_or(0);
And you have other choices, too. These are just two examples.
--------------
But to go back to the type thing for a bit, knowing statically you don't have any nulls allows you to do what some dynamic language fans call "confident coding," that is, you don't always need to be checking if something is null: you already know it isn't! This makes code more clear, and more robust.
I think the issue is that CSV parsing is really easy to screw up. You mentioned delimiter choice and escaping, and I’d add header presence/absence to that list.
There are at least 3 knobs to turn every time you want to parse a CSV file. There’s reasonably good tooling around this (for example, Python’s CSV module has 8 parser parameters that let you select stuff), but the fact that you have to worry about these details is itself a problem.
You said “handling data is complicated as much as the world itself is”, and I 100% agree. But the really hard part is understanding what the data means, what it describes. Every second spent on figuring out which CSV parsing option I have to change could be better spent actually thinking about the data.
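To illustrate how many decisions that is in practice, here's roughly what the same knobs look like with Rust's csv crate (a third-party library; the delimiter, header, quoting, and escaping choices below, and the file name, are all made up for the example):

    use csv::ReaderBuilder;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Every one of these is a decision you have to make before reading a single row:
        let mut reader = ReaderBuilder::new()
            .delimiter(b';')         // comma? semicolon? tab?
            .has_headers(false)      // is the first row data or column names?
            .quote(b'"')             // which character quotes fields containing the delimiter?
            .escape(Some(b'\\'))     // backslash escapes, or doubled quotes?
            .flexible(true)          // may rows have differing numbers of fields?
            .from_path("data.csv")?; // hypothetical file

        for record in reader.records() {
            println!("{:?}", record?);
        }
        Ok(())
    }

And none of that tells you anything about what the columns actually mean, which is the hard part.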
I am kind of amazed how people complain about having to parse what is practically a random file.
Whether there's a header or not should be specified up front, and one should not try to parse some unknown file, because that will always end up in failure.
If you have your own serialization and your own parsing working, then yeah, this will simply work.
But not pushing some errors back to the user and trying to deal with everything is going to be frustrating, because the number of edge cases is almost infinite.
Handling random data is hard; saying it is a CSV and trying to support everything that comes with it is still hard.
> I think the GPL has led to fairly noticeable increase in the amount of proprietary software in the world as companies that would happily adopt a BSDL component decide to create an in-house proprietary version rather than adopt a GPL’d component.
It also aligns with my experience: my company couldn’t find an LZO compression library that wasn’t GPL’d, so the decision was between implementing one in-house or cutting the feature. We ended up restricting use of the feature to in-house use only, but opening up our core source code was never an option.
If there had been a permissive license option available, we would’ve likely donated (as we do to several other dependencies), and would’ve contributed any fixes back (because that’s easier to explain to customers than “here’s our patched version”).
I quite agree, but to go a step further:
Lots of people express this fear that BSD-licensing of components will lead to a world where all the big proprietary software just locks everything down. But if you actually work with large software, you end up finding out that it's actually a lot of work to maintain your own fork, and so you start pushing to get more things upstreamed. Because, after all, if your code is private, it's your task to fix it if somebody breaks it, but if it's upstreamed, it's their task to do it.
An interesting datapoint is LLVM/Clang, where, yes, lots of companies have their own proprietary forks of Clang. But for the most part, those companies still upstream gobs of stuff: most of the biggest contributors to the project are the people with the proprietary forks. Furthermore, as I understand it, basically everybody relying on EDG has moved, or is in the process of moving, off of it onto Clang. Which, if EDG dies, means that the permissively-licensed Clang has done more to kill off proprietary compilers than copyleft-licensed GCC has.
The best defense against proprietary software is to make it too expensive for proprietary software to compete, and using a permissive license means you can trick the companies making proprietary software into helping you do that.
It's also an example of the GPL not preventing corporations from building (and releasing under a more permissive license) non-GPL software.
Before Apple built Clang, they made a GCC plugin[0] which dumped the AST from the GCC frontend, and fed it into the LLVM backend. Sure, they published the source of their GCC modifications under the GPL, but that was nothing compared to the entirely novel backend, which was released under a far more permissive license.
Meanwhile, Stallman handcuffed GNU projects like Emacs for years[1] by refusing to expose GCC's AST in an effort to prevent something like this from happening.
> Before Apple built Clang, they made a GCC plugin[0] which dumped the AST from the GCC frontend, and fed it into the LLVM backend.
But they didn't! Dragonegg is a GCC plugin, which didn't exist when work started on Clang--GCC plugins date to GCC 4.5, released in 2010, which is the same year that Clang became self-hosting.
The timeline is rather like this: LLVM originally relied on a hacked-up version of gcc 4.2 called llvm-gcc to create LLVM IR from source code. When GCC added support for plugins, there was the attempt to move that frontend to a more reliable framework, and that was Dragonegg. As far as I'm aware, Apple never contributed to the Dragonegg project itself. Dragonegg didn't last that long; by 2014, the project was completely dead. And for most of its life, the biggest interest in Dragonegg was to have some modicum of support for Fortran.
> Meanwhile, Stallman handcuffed GNU projects like Emacs for years[1] by refusing to expose GCC's AST in an effort to prevent something like this from happening.
By the time of that message, RMS's worst nightmare had come true with Dragonegg... only for Dragonegg to have already died due to a lack of demand for it.
You're right, I got Dragonegg mixed up with llvm-gcc. But my point stands – the GPL was completely ineffective at preventing someone from using it to bootstrap a new compiler infrastructure under a more permissive license.
So, the company wants us to trust them - trust that they will not take advantage of the BSDL to do things the GPL would prohibit them from doing. And we're supposed to trust them because doing so means that there would be more "hands on deck" (e.g. working on the now-canonical LZO compression library that everyone uses because it is BSDL).
Sorry, trust has been broken too often in these scenarios, and the benefits of lots more people working on the same library are not entirely clear.
I understand that many companies don't want to be a part of the pool of software licensed under the GPL - that is their right. But don't try to spin this "if only BSDL was the common one, there'd be a much bigger utilization of a different pool of software". That might even be true, but it would come with the caveat that tivoization would always be an option, which for some of us is something more significant.
It's not about trust, it's about the fact that 10% of something, or even the chance of 10% of something, is better than guaranteed 100% of nothing. You're not being taken advantage of if someone uses a BSD license the way it's supposed to be used.
If I put two choices in front of someone - the binary option of copyleft vs proprietary, where they'll always go proprietary, or something permissive, where there's at least a chance they contribute back - the second option is strictly better. It's sort of the equivalent of a wealth tax that raises no revenue: I'd rather have you here contributing something than moving abroad and getting nothing, even if the benefit is more indirect.
It's not 100% of nothing. If the choice is proprietary because they legally can't freeload off FOSS due to it being GPL, the company in question still needs to pay a developer to write the alternative. That's still a positive, and nothing lost. Permissive licenses just grant the freedom to freeload, nothing more.
I don’t think it would’ve in our case, at least. I actually quite like our OSS policy: basically, we don’t want to be in the business of maintaining forks, so all changes should try to get upstreamed if at all possible. It’s good client service as well: when our clients ask about the libraries we use, we’d much rather be able to tell them “the latest public version” than “here’s our fork”.
We also want to engage with and donate to the projects we use, mostly out of risk management: if they go unmaintained that’s bad for us.
If you truly want to upstream changes to libraries then I don’t see why lgpl code isn’t a choice for you. Use it in your proprietary code if you want, no changes no foul. If you make changes you just need to upstream or make the source available.
> If you truly want to upstream changes to libraries then I don’t see why lgpl code isn’t a choice for you.
Read LGPL a little more carefully. LGPL requires you to use it in a particular manner that lets any user swap out your version for their version, which basically means you're only allowed to dynamically link it as a shared library. Even if you make no changes to the library!
> that lets any user swap out your version for their version, which basically means you're only allowed to dynamically link it as a shared library
I had thought that the dynamic linking requirement was the only option according to the licence, but apparently not. According to 6.a. of v2 and 4.d.0. of v3 of the LGPL, it would be enough to give the user access to the source/object code of the main application and the compilation scripts under non-open-source terms, so that they can recompile and statically link it against their own version of the library.
> the GPL has led to fairly noticeable increase in the amount of proprietary software in the world
What he did:
> opening up our core source code was never an option.
It's companies that decide that open sourcing their software is 'never an option'–even the LZO compression part of it–that lead to a 'noticeable increase in the amount of proprietary software in the world'. They are just using the GPL bogeyman as a thin excuse for their own decisions.
His insinuation that if you just MIT your code so that big tech companies can use it, and if they end up needing help with it they might even hire you, seems super sleazy and borderline exploitive especially in the current job market.
Yes, there now exists more proprietary software, but you wanted to create proprietary software with it anyway, so this is kind of what the parent was talking about that some people don't want.
Partly related: would you have paid if the project had offered a paid non-GPL licence?
We (engineers) actually wanted to for another GPL’d project! But because they didn’t have a CLA, the lawyers wouldn’t sign off on it — they decided that the main/current maintainer didn’t have the rights to relicense it for us.
We probably would’ve for LZO too; not sure why that fell through.
> because they didn’t have a CLA, the lawyers wouldn’t sign off on it — they decided that the main/current maintainer didn’t have the rights to relicense it for us.
How would the legal argument be any different for MIT/etc. licensed software in that case? Would the lawyers sign off on using MIT-licensed software without a CLA? Wouldn't they make the argument that the provenance of the software and therefore its licensing is not solid? Seems like the only thing that matters is who has the right to offer a license to the software, not what the license is.
They just wanted the CLA to support the (paid) relicensing.
I think the reasoning (as it was explained to me) was that when people made their original contributions, they were agreeing to the license at that time (in this case GPL, but for other projects MIT). But the other contributors never agreed that the main maintainer could relicense their contributions for a fee.
The upshot was that we went with an in-house fully-proprietary alternative. More expensive, probably lower quality.
I've seen this personally. Google does not allow GPL code to be imported into the massive monorepo. If I needed a library and the only good OSS option was GPL, I'd write it from scratch instead. I'd also usually not open source those things if they weren't part of an already open source project, because it wasn't worth the effort.
I’m not OP, but at $WORK we sell a C++ library. We want to make it as easy as possible for clients to integrate it into their existing binaries. We need to be able to integrate with CMake, Meson, vcxproj, and hand-written Makefiles. We’re not the only vendor: if another vendor is using a specific cURL version, you better hope we work with it too, otherwise integration is almost impossible.
You could imagine us shipping our library as a DLL/.so and static-linking libcurl, but that comes with a bunch of its own problems.
That doesn't work if other teams want to apply their own cURL patches, or update as soon as upstream publishes new security fixes without waiting for you.
That's the point. We don't do that. You link to the system libcurl dynamically and everyone is told to do the same.
If you want to use a private curl as an implementation detail then the only safe way to do it is to ship a .so, make sure all the symbols are private and that symbol interposition is switched off.
If you ship a .a then the final link can always make symbols public again.
There's also a sort-of informal "standard library" of C libraries that have super-stable ABIs that we can generally assume are either present on the system or easy to install. Zlib is another one that comes immediately to mind, but there are others as well.
The "no damage being caused on the surface" was a "new fact". That would never fly today, for better or for worse.