Entalpi's comments | Hacker News

Does anyone know of a more accurate economic measurement of growth than GDP?

Building bombs a la Russia and paying a lot for healthcare is great for GDP, but is that money well spent to grow the economy?


It's like we have built our entire lives around another time system! /s

You get similar problems converting between Freedom degrees and Celsius. It's just what people have built an understanding of.


I disagree; it's not equal but different. The French famously tried to switch to a decimal system for angles before, but failed in no small part because 10 has relatively few unique prime factors (360 = 2^3 * 3^2 * 5 divides evenly by three, while 100 and 400 do not). Being able to divide evenly by three turns out to be more important than by five.


To quote yet another time format: the NTP 64-bit timestamp format (RFC 8877), which is 32 bits of seconds since epoch + a 32-bit fixed-point second fraction. (Outside of the Network Time Protocol, you'll find this baby for instance in the ISOBMFF ProducerReferenceTimeBox (prft).)

Here seconds are just 1/(24*60*60) of a day, as expected, but the base-2 fixed-point part, where a tick "is roughly equal to 233 picoseconds", makes you want to pull your hair out if you just want to accurately express milliseconds. (Similarly for other timescales frequently used in media processing, like 90 kHz, 25, 60 or 29.97.)

The answer to all this is of course: hand waving — "you don't need that". Your time can be perfectly accurate in itself (ie. an accurate discrete sample of continuous time), even if no accurate conversion exists to some other time system.
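
To make the 233-picosecond complaint concrete, here's a rough Rust sketch (my own illustration, not reference NTP code; the values are arbitrary placeholders) of packing one millisecond into that 32-bit binary fraction:

    // RFC 8877 layout: 32-bit seconds plus a 32-bit binary fraction of a
    // second, so one tick is 2^-32 s (roughly 233 picoseconds).
    fn to_ntp64(secs: u32, millis: u32) -> u64 {
        // The exact value would be millis * 2^32 / 1000 ticks, which is only
        // an integer when millis is divisible by 125, so this has to round.
        let frac = ((millis as u64) << 32) / 1000;
        ((secs as u64) << 32) | frac
    }

    fn main() {
        let ts = to_ntp64(1_700_000_000, 1); // one millisecond past the second
        let frac = ts & 0xFFFF_FFFF; // 4_294_967 ticks, not the exact 4_294_967.296
        let back_ns = (frac * 1_000_000_000) >> 32;
        println!("1 ms round-trips to {back_ns} ns"); // 999_999 ns, not 1_000_000
    }

The stored fraction is within a tick of the intended millisecond, but the exact decimal value is already gone, and every conversion back to a decimal timescale rounds again.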


You meant "Fahrenheit" rather than "Freedom", didn't you?


I think they said what _they_ meant _and_ still meant what _you_ said.


Is this a common joke?



Thanks, I've learnt something today.

Just out of curiosity: is it just a joke, or is there some idiocy behind it, such as Sovereign Citizen or similar?


It's tongue in cheek or wink wink. No one uses it seriously but there is sometimes an undercurrent of resistance to what's seen as an external cultural standard. But you'll never see it used as a kind of rallying cry.


Customer choices can still lead to monopolies.


Crimea is de jure in Ukraine per international consensus. Crimea is de facto occupied by Russia. These are orthogonal statements and both are valid. Everything else you listed derives from these premises.


Sign up flow is broken on mobile. :/


Had a look at the book and it seems excellent, thanks for sharing.


Pretty sure from a build-system perspective it's quite a flex, to be fair.


I understand the intent of the flex, but if true, it suggests there's very little public Rust outside of packages that can be downloaded from crates.io and a smallish list of alternatives.

By comparison, there's so much publicly available Python code, from so many sources, that no one can honestly say they can even find it all. The same for C++.

I've seen papers where the source code was included in the paper itself (eg, the FORTRAN code in Sibson's 1973 "SLINK" paper), or only distributed as a zip file from the author's web site, or in the supplementary data (eg, https://scholar.google.com/scholar?q=%22source+code+in+the+s... ) .

Personally, I don't think it's true. I suspect Rust changes - just like newly proposed C++ changes - are checked only against easily accessible and "well-known" packages.


> if true, it suggests there's very little public Rust outside of packages that can be downloaded from crates.io and a smallish list of alternatives.

You seem to be suggesting that it's a good thing that the public code is spread across so many different places that it cannot all be found. I don't see how that's an inherently good thing. It says less about the total amount of code than it does about the lack of any central resource that can be consulted.


> that it cannot all be found.

Do you think you can find all public Rust code?

Like, if I'm teaching a Rust course, and put a hello-world.rs program on my department's public GitLab instance, under an MIT license, do you think I should also put that on GitHub? And register it as a crate?

> the lack of any central resource that can be consulted.

And you say that like it's a good thing.

You want everything to be centralized on GitHub? If so, you want to force all research software developers to agree to GitHub's terms, including those who are ardent free software advocates.

You also prevent 12-year-olds from publishing their Rust source code. (GitHub's terms of service don't allow that.)

Or, do you also allow BitBucket [1], and GitLab [2]?

[1] https://bitbucket.org/project_samar/samar_lite/src/master/ contains two Rust programs, neither on crates.io

[2] https://gitlab.com/rouault-team-public/analysis/umaprs

What about department instances of GitLab? [3]

[3] https://gitlab.anu.edu.au/mu/mu-impl-fast/-/tree/rtmu-dev

It really doesn't seem like it's all that easy to find all publicly available Rust code.


What bearing does any of this have on the previous thread of discussion?

Why do you think a 12 year old needs to publish their "hello world" programs because of Crater? The purpose of Crater is uncovering subtle compiler regressions. If "hello world" were ever broken, it would likely be discovered by the standard test suite or, more generally, long before the Crater run.

This isn't a matter of "allowing" anything. It's just a statement that yes a Crater run does test all meaningful publicly available code, where "meaningful" at the very least means code which is consumed via crates.io. Sure, there is very likely public code that exists elsewhere which Crater cannot find, and that's OK. The point is that a Crater run coming back clean means something, because a very very wide swath of code was tested.


What is "the previous thread of discussion?"

My response was all of 5 lines, saying that if dthul's comment were true, then it would imply that Rust has a rather small code base.

And indeed, Crater does not test all publicly available Rust code. ("Not all code is on crates.io! There is a lot of code in repos on GitHub and elsewhere", and only for "Linux builds on x86_64", not Windows, says https://rustc-dev-guide.rust-lang.org/tests/crater.html).

Rust is much bigger than dthul's comment implies.

You may well be correct when adding the qualifier "meaningful", but that's a different thread of discussion.

> Why do you think a 12 year old needs to publish their "hello world" programs because of Crater?

I mentioned that because you changed the thread of discussion to discuss centralized vs. decentralized code distribution.

> because a very very wide swath of code was tested.

And C++ language developers also analyze a 'wide swath of code' - millions of lines or more - for changes.


To be fair to the original argument, I think it's important to understand that there is next to no Rust code in comparison to the amount of C++ code out there. It has almost no projects in comparison, and those projects are much, much smaller. I don't think that's a very controversial statement, because it's very obviously true.

Now, it's also important to keep in mind that C++ has a terrible story when it comes to centralized (or otherwise, really?) repositories for packages, so the corresponding system for C++ is at the moment completely infeasible and not at all useful. That doesn't really make the Rust code that's tested against any more meaningful in comparison to the vast amounts of C++ code out there, though.

Edit:

At the kind of pointless and debilitating scale at which C++ exists, and with the relationship C++ has with packages and dependency management, this entire idea is basically impossible.


Rather than hypothesising about an imagined tool, you could look at the actual tool, which of course is in Rust's source code repo: https://github.com/rust-lang/crater

> new proposed C++ changes - are checked against only easily and "well-known" accessible package.

Now that I have, so to speak, shown you mine, let's see yours. Where is the tool to perform these checks in C++?


Thank you for showing that I was right in my belief: 'I suspect Rust changes - just like newly proposed C++ changes - are checked only against easily accessible and "well-known" packages.'

My point is that dthul's comment "they usually test it against all publicly available Rust code" implies Rust has a very small user base. Since Crater runs only against parts of the Rust ecosystem - those available on GitHub and crates.io - it implies a rather larger ecosystem.

As for "mine" - what I know about C++ development comes from reading links posted to HN; hardly "mine" in any meaningful sense. I also don't accept your wording "these checks", because my point is that similarly useful checks are done, not exactly identical tests. I wrote 'FWIW, the C++ standards developers use do use code search tools to help identify possible breakage.'

From previous readings, I know they do code surveys, and experiments using existing code bases and compilers.

For example, there's https://codesearch.isocpp.org/ ("developed for ISO Standard C++ proposal authors in order to explore existing C++ practice and to provide empirical evidence to support claims about existing practice made in proposals."), which is used in surveys to understand how code is used. For example, https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p14... .

At https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p11... they used a custom tool to analyze Boost, Chromium, Firefox, the Linux Kernel, Libreoffice, LLVM, and Qt: "Estimated 30 to 80 millions LOC compiled".


I don't see "We sometimes do some ad hoc checks including looking for stuff with code search" as "similarly useful" to using proper test automation at all.

And I think the results continue to speak for themselves.


"Estimated 30 to 80 millions LOC compiled" sounds more than code search, yes?

Don't confuse my ignorance of the process for lack of process.

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p27... describes a proposal to "zero-initialize all objects of automatic storage duration", with a test-implementation as an "opt-in compiler flag", and tested on "The OS of every desktop, laptop, and smartphone that you own; The web browser you’re using to read this paper; Many kernel extensions and userspace program in your laptop and smartphone; and Likely to your favorite videogame console."

Or from https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n43... "To assess how common these cases are likely to be in practice, we conducted a ClangMR analysis of a codebase of over 100 million lines of C++ code, identifying every location where a std::function is given a new target".

"Proper" and "ad hoc" have very strong personal components. Is it proper or ad hoc that Crater only tests public code, while C++ developers have access to large private code bases ("the OS of every desktop") for carrying out their tests?

Is it proper or ad hoc that Crater only checks crates.io and some GitHub repos?

Is it proper or ad hoc that Crater doesn't test under Microsoft Windows?

As for the results, what will Rust language development look like when there's 10 billion lines of Rust code, and only a tiny fraction of it is visible?


> "Estimated 30 to 80 millions LOC compiled" sounds more than code search, yes?

Does it? Your belief is that the authors wrote two compilers (C and C++, because these codebases are in two different languages) with features they're not proposing and don't think should be used, in order to actually compile this code and check that it works - but alas, although they had to do all this complex compiler-internals work, they didn't find time to have the frontend parser count the lines of input?

"They just used code search and estimated" doesn't sound infinitely more likely to you?

> Don't confuse my ignorance of the process for lack of process.

Your ignorance certainly plays a role, but I don't see process.

P2723 is talking about widespread experience in real systems, but it's not a "test" implementation, it's just widespread real-world tooling, because this is a real-world safety hazard regardless of whether C++ ever fixes it. -ftrivial-auto-var-init is the name of the Clang and GCC flag for example. That's how they can be confident it's used by "The OS of every desktop, laptop and smartphone you own" - it's one of the early checklist items that OS vendors use to slightly improve their C and sometimes C++ programs at very low cost.

Microsoft's team actually gave a talk about landing their equivalent; they had to fight harder because inside a proprietary codebase, it turns out, even more C++ programmers mistake their ignorance for competence, and thus are convinced the C++ standard is correct here and such mitigations are at best a waste of time and at worst actively destructive. Also their optimiser is apparently terrible, which, if you've used MSVC, checks out.

Thus this C++ proposal is, like in "days of yore" just citing existing real world use.

The C++ developers don't actually have direct access to other people's code. JF Bastien (the paper's author) used to work for Apple, so it's possible he's actually seen Apple's teams using this flag, but either way Apple have announced that they do so. Microsoft publicly talked about using their equivalent for Windows, and the Linux vendors advertise that they have such mitigations. Anecdotes. To insulate this proposal (not very effectively it turned out) against people who insist the price of this change is too high to be feasible.

It turns out that in C++ land "We actually did this and it works" does not trump "I don't think it would work"

N4348 is talking about, and indeed cites, Google's experience with its own code using a smarter "refactoring" tool that Chandler and Hyrum have talked about publicly on several occasions. This is slightly fancier than code search, but it's still very much ad hoc which is why this gets mentioned once in that paper but isn't in the others you looked at.

When a tool systematically does the same thing, over, and over, that's anything but ad hoc.

In some ways you should expect Rust code to grow more slowly. If you ask that code search guy from your previous comment, he'll tell you that a lot of C and C++ software has big machine-generated data files as "source code". Until C23 there was no #embed, whereas Rust has from the outset offered std::include_bytes!, which is what you'd want instead of #embed if you weren't fighting neanderthals (JeanHyde sounds exhausted by the experience).
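
For anyone who hasn't seen it, a minimal sketch of what I mean (the file name is just a placeholder; the file must exist next to the source at compile time):

    // include_bytes! embeds a file's raw bytes into the binary at compile
    // time - the job C and C++ projects have done with machine-generated hex
    // arrays, and roughly what C23's #embed finally provides.
    static ICON: &[u8] = include_bytes!("icon.png");

    fn main() {
        println!("embedded {} bytes", ICON.len());
    }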

However, over time software of course grows, and the more powerful, safer abstractions in Rust are expected to encourage that, so sure, 10 billion lines of Rust; I'm not sure why that's such a milestone. No, I don't expect big changes as a result.

Did the documents you reviewed make you think the hidden C++ is so much different than the piles of it that are available in a public code search? Was that the message you received?


> 30 to 80 millions LOC compiled

I figured it was because "line of code" is not all that meaningful, and not worth specifying more precisely than that.

Does it include comments? Is it after macro expansion? What about \ continuations? Does a bare "}" on its own count as a line of code?

BTW, how many LOC does Crater run in a full test, and how long does it take/how expensive is a run? I failed to find that information.

> The C++ developers don't actually have direct access to other people's code

I don't know what you mean by that. They certainly have access to public source code, just like Rust developers do. (Chromium, LLVM, Boost are mentioned in https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p11... ).

It would seem very odd if Microsoft's representatives had no idea how changes to C++ would affect internal Microsoft code. I strongly suspect VC++ changes/extensions are tested against in-house Microsoft code bases before making their way to the standard, because it makes no sense to undermine your own systems. For the same reason, I suspect proposed changes are tested internally at Microsoft.

And from papers like https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p21... I know there is in-house experience of proprietary code bases guiding how the C++ standard changes.

> Did the documents you reviewed make you think the hidden C++ is so much different than the piles of it that are available in a public code search? Was that the message you received?

That's not really my point. (Indeed, as that last paper link from Bloomberg points out, "It is our understanding that Bloomberg’s experience is not dissimilar to most Free/Libre Open Source Software communities".) Instead:

1) How much of the "very very wide swath of code" is meaningful, in terms of language feedback? That is, how much of the automation is being employed because it's there, rather than because it's useful?

If an automated method checks 500M LOC but the interesting cases only ever come from the same set of 1M LOC, wouldn't reducing the working set help with turnaround?

(Indeed, https://ethz.ch/content/dam/ethz/special-interest/infk/chair... uses Crater to look at only the 500 most used crates, implying they think using an ad hoc subset is sufficient for their purposes.)

(Incidentally, it's hard to find any published scholarly papers on Crater. There's a lot of rust, both the iron and plant kind, in terrestrial craters!)

2) Would a C++ equivalent for Chromium, Qt, LibreOffice, KDE, Firefox, and a few dozen well-known large packages give the same feedback for C++? Why or why not?

If not, would ~100 packages be enough? What about ~1,000?

3) How do you know that Rust compilation of the packages on crates.io, only for x86-64 Linux, gives better feedback for the types of issues that C++ faces than the "ad hoc" methods they use for C++?

That is, just because a tool fits Rust's needs and goals doesn't mean it fits the C++ spec developers' needs and goals.

4) How would a tool like Crater help in a possible future where there are a dozen different and competing Rust implementations?

That is, https://blog.m-ou.se/rust-standard/ argues there doesn't need to be a standards committee for Rust because there is only one Rust implementation, with tools like Crater to help maintain compatibility. I'm familiar with this viewpoint as I come from the Python world; while there are alternative Python implementations, they all look to CPython as the reference language.

But in C++ there are many C++ vendors, some with an economic incentive to have new features which might break old code, but which their customers will pay for. On the other hand, their customers have an economic incentive to prevent vendor lock-in. Hence, a standard.

If a hypothetical EESMith Rust drops a few rarely used features to give a 2x run-time performance gain and 5x compilation performance gain, then you can bet that people will switch to it. But is that Rust? And will mainline Rust still preserve backwards compatibility even in the face of competition?

> I'm not sure why that's such a milestone

Do you expect Crater to scale to compile 10 billion lines of Rust in a reasonable time and cost? Or will Crater drop testing most packages by then?

> Jean-Hyde sounds exhausted by the experience

Developing a C++ standard with multiple entrenched and sometimes competing vendors is no easy task. Rust doesn't have to deal with it ... yet.


> BTW, how many LOC does Crater run in a full test, and how long does it take/how expensive is a run? I failed to find that information.

I have nothing more than a finger in the air estimate for LOC, maybe hundreds of millions?

I have never watched a "full test" like for a release build - I believe those take several days - but when Crater is asked just to build everything, that takes a little under 24 hours with its current footprint.

> I strongly suspect VC++ changes/extensions are tested against in-house Microsoft code bases before making their way to the standard, because it makes no sense to undermine your own systems.

Surely it stands to reason that if Microsoft are proposing standardisation of a feature they've shipped in MSVC, that's also a feature they've tried using? This model of ISO C++ features (which the developer of Circle also prefers) maps much better to what was initially envisioned than today's reality however. Most C++ proposals today are not submissions of existing compiler features from the big three compilers (MSVC, GCC and Clang) but instead fresh before the committee, often with no implementation experience at all.

That's certainly one way to do it, after all Rust contributors don't have their own Rust compiler either, but it means you need very different tooling.

1) Breadth matters much more than depth for finding surprises which is the thing you won't get with an ad hoc approach. Going from 10% of some big corporate code base to 20% won't make anywhere near the difference you get from adding a hundred one-man-band projects that are smaller even in total, because different stylistic and idiomatic choices make so much more practical difference for this work.

2) As a result "a few dozen" won't cut it. Try all the C++ on github, that seems like a much better place to start.

3) Sure, the primary goal of WG21 proposers is to get into the IS - it would be nice if what they've proposed actually works, but ultimately if it doesn't work that can be fixed later, whereas if it's not adopted then it doesn't matter whether it would work.

Arguably there have never been any versions of the C++ IS which actually describe a complete working programming language, so it's not terribly important that if it were such a system it would be correct, still there's a preference for fewer rather than more horrible gotchas.

I mentioned #embed, so that's a useful example here: C++ 23 doesn't standardize #embed. So in theory C++ code can't use #embed; that's not C++. But of course in reality the vendors are going to ship a pre-processor which handles #embed - they don't care - so it'll work, and it's widely expected you will be able to use it even in older C++ versions.

4) If there was a specification then a tool like Crater might be somewhat helpful for that, but I expect that most effort would remain focused on a single implementation, today that is of course the Rustc compiler with its LLVM backend.

The hypothetical EESmith Rust sounds spurious to me: how could it deliver 2x run-time performance by removing "rarely used features"? I don't think spurious hypotheticals are a good use of anybody's time.


> I have nothing more than a finger in the air estimate for LOC, maybe hundreds of millions?

And if there were a number, it would reflect only one of several ways to quantify "LOC", right? Resulting in a spread of numbers that could meaningfully be described as "LOC"?

> Breadth matters much more than depth for finding surprises which is the thing you won't get with an ad hoc approach

I would be quite interested in someone doing a research publication on this topic!

Given the history of using Crater, which packages have proved most useful? Do the same core packages prove useful over time, or does the most significant subset change wildly? What does the cumulative distribution plot (#packages until time of appropriate feedback) look like?

How worthwhile is the additional breadth from crates.io + GitHub vs. just crates.io? Is it worthwhile to also include GitLab, and what are the tradeoffs (eg, additional compute costs, additional false positives)?

For that matter, how useful would it be to add Linux+ARM to the current Crater tests? Or Microsoft Windows? If breadth is that important, then why skip out on the full set of Rust code you have available?

> As a result "a few dozen" won't cut it.

I did follow up with "If not, would ~100 packages be enough? What about ~1,000?" :)

If there's no equivalent of a dose-response curve / ROC curve / price-performance curve, and the answer is "must try everything" then how do I know the extra effort is useful, rather than FOMO-driven anxiety?

> Try all the C++ on github

Assuming there were a single way to build all C++ code - how much do you think it would cost to compile all the C++ code on GitHub? And why do you think the additional cost would be worthwhile to C++ standards development?

> but I expect that most effort would remain focused on a single implementation,

Oh, given my experience with Python implementations, I agree!

But my point is processes change when you have multiple competing commercial vendors, which C++ has. So looking at how Rust does things doesn't mean it's also appropriate for C++.

> I don't think spurious hypotheticals are a good use of anybody's time.

Okay, something more practical. C++11 broke backwards compatibility by changing how 'auto' works. "auto int i;" used to be valid, now it's an error. This is a huge boon for usability. It's a trivial syntactic change to fix old code, and long experience shows the old "auto" storage class was rarely used.

How would the systematic compilation of all C++ code on GitHub (assuming that were possible) affect that decision more than the ad hoc methods they did use to make that decision?

Will there really never be something in Rust where a simple breaking change of a rarely used feature can result in an easier-to-use language?

If there can, then you may have a schism, either temporary (gcc vs egcs fork) or more permanent (Perl5/Perl6/Raku). Which will be "Rust"?

The answer is legally quite clear. The Rust Foundation has the trademark to "Rust" (serial number 87796977). My version couldn't break backwards compatibility and keep the name, even as a fork, so I would have to call it, perhaps, "Verdigris". (As I recall, someone started to develop a "Python 2.8" with more backports from Python 3; the PSF got after them for using the Python trademark that way.)

C++ doesn't have trademark protection, so the legal concept of what is/is not C++ is also different from Rust's.


> How would the systematic compilation of all C++ code on GitHub (assuming that were possible) affect that decision more than the ad hoc methods they did use to make that decision?

I doubt it would affect the actual decision at all, WG21 has been very comfortable relying on gut instinct, even in the face of reality, so there's no reason they'd be affected by the results of more systematic testing.

> Will there really never be something in Rust were a simple breaking change of a rarely used feature can result in an easier-to-use language?

Now we're talking about something woollier than your performance hypothetical. Surely almost any change can be sold as "easier to use" if you're motivated. Herb Sutter seems motivated, for example: every CppCon he has a proposal for how to make C++ "easier to use" by further complicating it. An immediate caution, though: in what way is half of a fractured ecosystem "easier to use"? The other half is no longer available to you; that's certainly not easier to use than before.

Rust programmers aren't used to taking such deals because Editions have been leveraged to give them better alternatives without the compromise.

This promise got stronger over time, rather than weaker as you seem to expect. There's complicated Rust 1.0 era code (e.g. early ripgrep) which doesn't even build today on a current compiler, because something it did is wrong and the Rust 1.0 compiler didn't spot that but modern ones do - back then it was less likely they'd see the compatibility break as a big deal; it was "just" a bug fix.

C++ compilers fix those sorts of bugs all the time, even today. Rust wouldn't take those fixes so easily, modulo Crater measurements, but as you've shown, C++ doesn't have that.


> so there's no reason they'd be affected by the results of more systematic testing.

Let's go back to the g'parent comment that started this branch, at https://news.ycombinator.com/item?id=36387994 .

muxator wrote "the author of this little proposal (officializing "_" as a no name placeholder) had to perform a thorough research to show that this change would not break existing code".

What was that "thorough research"? The paper doesn't mention it, but does imply there was a code search to find the examples it listed.

I assume you think that research was also "gut instinct", rather than "thorough research". Is that only because it did not do full compilation of all C++ code on GitHub, or is there something more seriously wrong with that research?

Further, while you wrote "Most C++ proposals today are not submissions of existing compiler features from the big three compilers (MSVC, GCC and Clang) but instead fresh before the committee, often with no implementation experience at all", that specific spec says it was implemented in Clang.

It therefore seems like the proposal which kicked off this long thread is a counter-example to your characterization of C++ language development.

You have not addressed my question - how do you know the extra effort in a full Crater run of all crates in crates.io + GitHub is useful, rather than primarily FOMO-driven anxiety?


> I assume you think that research was also "gut instinct", rather than "thorough research".

My goodness no. "Gut instinct" is how the decisions are made, but the research you're talking about was made for a proposal paper. There are different incentives in play.

For the proposer the incentive is to get something to show for the enormous effort expended in making a proposal - usually months, sometimes years, across dozens of meetings and discussions and presentations. It's soul-destroying stuff. Ideally the sub-committees you're seeing would approve your work and it can go to another committee; more likely they will have suggestions for how it could be altered so as to satisfy them, and after a few iterations that can result in approval of a subsequent revised document; often they just have open-ended questions for you, which perhaps might be satisfied in some future proposal document by answering the questions somehow; or they just aren't interested and you're told to go away.

A show of your extensive research might make it easier to achieve your goal. You have an incentive to make this research seem as comprehensive as possible for that purpose in support of your goal.

But the people making the decision don't have that incentive. They could - in principle - spend hours on reading all the work you did, they could - in principle - replicate that work or even do their own research. In reality they are probably thinking about whether they can break early or move on to something they care about more. I would summarise their reasoning as gut instinct. Does this sound like something we should do? Maybe not. Straw poll question: Do we want this? Vote Against, nothing personal.

I mentioned JeanHyde before. JeanHyde has seen how this sausage is made; be sure to read his experience and think about it carefully before believing any fairy tales you've heard or any imagined process. Remember, the essence of JeanHyde's proposal was just this: 1) It would sure be nice to use blobs of binary data in my programs. 2) The existing ways to achieve this are garbage - so we need a new one.

JeanHyde spent years defending basic obvious stuff in front of people strongly motivated to believe he's wrong, since that's just easier than doing any work. At its most basic the question is: given a lot of bytes of data in a file, or a lot of ASCII hexadecimal values written as C literals, which can be processed more quickly? The committee was strongly motivated to insist the answer was the ASCII hex, even though JeanHyde had tables showing the raw data is much faster.

The committee hallucinated into existence rules like "JeanHyde's proposal can't be in the standard unless there are working implementations". If you're wondering why your C++ compiler didn't have a complete C++ 20 implementation in 2021, you might be surprised to hear that there is such a rule - that's because there is no such rule; it's an excuse.

Another hallucinated rule is very amusing to Rust programmers. WG21 would like to believe that C++ compilation doesn't result in executing code. So, if Bob makes a malicious C++ program, sure, running the program might be bad, but certainly compiling it is fine. This belief is laughable, but laughing at them won't get your proposal accepted, so you must try to navigate the fantasy world they live in, where their C++, which doesn't have this capability, can accept your proposal, without introducing the capability C++ already has. It's like you're playing Mornington Crescent with opponents who believe there are rules and they know what they are. Terrifying.

And so it isn't in C++ 23. The C++ 23 standard doesn't have JeanHyde's proposal. WG14 took #embed for C23, so C23 does have it, and of course in reality C++ programmers can expect to benefit from that, and that's the awful, miserable reality you're defending.

> how do you know the extra effort in a full Crater run of all crates in crates.io + GitHub is useful, rather than primarily FOMO-driven anxiety?

It periodically finds problems. And Rust is equipped to deal with those problems so the forewarning is practically useful.

In C++, if a syntax change breaks some fraction of programs, well, too bad. I guess it would be nice to know, but as you saw, the committee might (or might not) do it anyway. In Rust, that can be handled via the Editions mechanism. But to do that you need to know about it before you ship the compiler with the syntax change, so as to mark it as applying only to the future edition you're adding it to.
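
A toy illustration of how an Edition absorbs that kind of break (my own example, not from any Crater run): async became a keyword in the 2018 edition, yet crates staying on edition 2015 in their Cargo.toml that used it as a plain identifier keep compiling on today's compilers.

    fn main() {
        // On edition 2015 this could simply be `let async = 5;`, because
        // `async` was not yet a keyword there. On edition 2018 and later the
        // raw-identifier spelling below is required - and that's the whole
        // migration cost, paid only when a crate opts into the new edition.
        let r#async = 5;
        println!("{}", r#async);
    }

Which is exactly why the forewarning matters: the compiler has to know which edition a syntax change belongs to before it ships.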


Give away? You are selling it on a global market and buying on the very same market.


This. We need nuclear to keep the base load green. Wind and solar need batteries to be base load providers. Building out both battery and renewable infrastructure at the same time is a tall order.


Yeah this is what most people are not getting in this whole debate. That and the fact that it's really costly to transmit electricity over long distances, even at extremely high voltages.

The best solution is a bunch of smallish nuclear reactors spread out over the continent for base power production, wind/solar/hydro (is wave electricity a thing? I remember it was discussed a bunch of years ago in relation to green production) wherever those make sense, and high voltage transmission lines to allow load balancing.

But we don't have that, and it takes a long time to build so we should get started now, but instead they will say "oh we should have built that 20 years ago now it's too late"... :(


The surprising thing is that, with rare exceptions (like in Finland), people who have money to invest don't believe that investing in nuclear power will actually make them a profit.

We know the price of CO2 will just go up and up. We know how hard it is to create a base load with wind/solar and storage.

Yet nobody with money believes it is worth investing in nuclear.

Of course, looking at Hinkley Point C, Olkiluoto, Flamanville makes it very clear why nobody wants to invest in nuclear power.


I think the biggest reason is public opinion (by idiots who don't know what they are talking about and just think nuclear is bad because they've been conditioned to think so by decades of misinformation) driving politics in the wrong directions, possibly meaning that any plants built may be forced to shut down before they've been operational long enough to offset the construction costs and become profitable.


Finally.

