WorldMaker's comments | Hacker News

It gets the name from "rugby football", from the time when both rugby and international football ("soccer") were considered related sports and often shared rules and associations (such as the London Football Association, from which Princeton imported rules to the US, and which eventually started to split as the sports diverged further).

The change in shape from a round ball to the "handegg" eventually derived from the first American innovation in "rugby football": the forward pass. Even with the forward pass, the game required kicks into goals for a long while, with the "touchdown" a later innovation (though one also influenced by re-importing rugby rules, as it relates to the rugby "try"). Kickoffs, punts, field goals, and extra point attempts all still vestigially remain from the rugby origins even as most of the play between them changed drastically.

American Football is called "football" because it evolved from the "football family". It's like using the term "Romance language": Spanish and Italian sound very different today, but they both share roots in Latin. They've also both changed a lot since the days when Latin was a living language.


Some games get packaged with other emulators beyond DOSBox as well. GOG includes and configures ScummVM for some applicable games, for instance.

It's definitely not zero, because rebase-heavy workflows involve the rerere cache, which is a minefield of per-repo hidden merge changes. You get the results of "criss-cross merges" as "ghosts" you can't easily debug, because there aren't good UI tools for the rerere cache. About the best you can do is declare rerere cache bankruptcy and make sure every repo clears its rerere cache.
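
For anyone who gets to that point, the bankruptcy declaration is mercifully short (assuming a standard .git layout):

    git rerere clear      # reset rerere's metadata for any in-progress resolution
    rm -rf .git/rr-cache  # drop every recorded resolution in this clone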

I know that worst case isn't all that common, or everyone would be scared of rebases, but I've seen it enough that I have a healthy disrespect for rebase-heavy workflows and try to avoid them when I'm given the option or am in charge of choosing the tools/workflows/processes.


To be honest, I've used rebase-heavy workflows for 15 years and never used rerere, so I can't comment on that. (I've been a happy Jujutsu user for a few years, and I've always wondered what the constituency for rerere is; I'm curious if you could tell me!) I definitely agree in general that whenever you have a cache, you have to think about cache invalidation.

rerere is used automatically by git to cache merge conflict resolutions encountered during a rebase so that you don't have to reapply them when rebasing the same branch again later. In general, when it works, which is most of the time, it's part of what keeps rebases feeling easy and lightweight, even though the final commit output sometimes captures only a fraction of the data of a real merge commit. The rerere cache is, in some respects, a hidden collection of the rest of a merge commit.

In git, the merge (and merge commit) is the primitive, and rebase is a higher-level operation built on top of it, backed by a complex but not generally well understood cache that has only a few CLI commands and just about no UI support anywhere.
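
For reference, the handful of CLI commands it does have:

    git rerere status     # paths whose resolutions rerere will record
    git rerere diff       # current resolution vs. what was recorded
    git rerere remaining  # conflicted paths rerere hasn't auto-resolved
    git rerere forget <pathspec>  # drop the recorded resolution for a path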

Like I said, because the rerere cache is so out of sight and out of mind, problems with it become weird and hard to debug. The situations I've seen have been truly rebase-heavy workflows with multiple "git flow" long-running branches and sometimes even cherry-picking between them. (Generally the same sorts of things that create "criss-cross merge" scenarios.) Rebased commits start to bring in regressions from other branches. Rebased commits start to break builds randomly. If what is getting rebased is a long-running branch, you probably don't have eyes on every commit, so finding where these hidden merge regressions happened becomes a full-branch bisect; you can't just focus on merge commits, because you don't have them anymore, and every commit is a candidate for a bad merge in a rebased branch.
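
A sketch of what that hunt ends up looking like, assuming some ./build.sh (hypothetical) that exits non-zero on the regression:

    git bisect start <bad-commit> <good-commit>
    git bisect run ./build.sh   # every rebased commit is a suspect, not just merges
    git bisect reset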

Personally, I'd rather have real merge commits, where you can trace both parents and the code that came from neither parent (conflict fixes), and where you don't have to worry about ghosts of bad merges showing up in any random commit. Even the worst "criss-cross merge" commits are obvious in a commit log, and the ones I've seen have had enough data to fix surgically, often nearly as soon as they happened. rerere cache problems are things that can go unnoticed for weeks, to everyone's confusion and with potentially a lot of hidden harm. You can't easily see both parents of the merges involved. You might even have multiple repos with competing rerere caches taking turns doing damage.

But also, yes, rerere cache problems are generally so infrequent that when one does happen it might take weeks of research just to figure out what the rerere cache is for, that it might be the cause of some of the "merge ghosts" haunting your codebase, and how to clean it.

Obviously, by the point where you are rebasing git flow-style long-running branches and using frequent cherry-picks, you're in a rebase-heavy workflow that is painful for other reasons, and maybe that's an even heavier step beyond "rebase heavy" to some. But because the rerere cache is involved to some degree in every rebase, once you stop trusting the rerere cache it can be hard to trust any rebase-heavy workflow again. Like I said, I personally like the integration history/logs/investigatable diffs that real merge commits provide, and I prefer tools like `--first-parent` when I need "linear history" views/bisects.
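
For example, roughly:

    git log --first-parent main      # one entry per integration, merges collapsed
    git bisect start --first-parent  # bisect along the mainline only (git 2.29+)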


You have to turn rerere on, though, right? I've never done that. I've also never worked with long-running branches; I tend to strongly prefer integrating into main and using feature flags if necessary. Jujutsu doesn't have anything like rerere as far as I know.

Hmm, yeah, it looks like it is off by default. Probably some git flow automation tool or other bad corporate/consultant-disseminated default config at a past job left me with the impression that it was on by default. It's the solution to a lot of papercuts when working with long-running branches, as well as the source of the new problems stated above: problems that are visible with merge commits but hidden in rebases.
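
Easy enough to check which camp a given clone is in:

    git config --get rerere.enabled  # unset means off (unless .git/rr-cache already exists)
    git config rerere.enabled true   # what those flow tools presumably flipped on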

I like iOS' "Notification Summaries" as a compromise. It's about as close to "turn all these notifications into a regularly scheduled newspaper" as we can get.

I still cull notifications that I don't think provide value (notifications are a privilege based on trust, and apps that break that trust lose that privilege), but yeah, even the notifications I do allow only really arrive once every 4 hours or so, and that's nice.


> Reading your article again, I wonder whether your compiler is just not doing the right things to take advantage of the performance boosts available via CoreCLR?

> E.g. can you do things like stackalloc temp buffers to avoid allocation, and does the stdlib do those things where it is advantageous?

The C# standard library (often called the Base Class Library or BCL) has seen a ton of Span<T>/Memory<T>/stackalloc adoption internally in .NET 6+, with each release adding more. Things like file IO and serialization/deserialization in particular see notable performance improvements just from upgrading each .NET version: .NET 10 is faster than .NET 9 with largely the same code, and so forth.
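
Not actual BCL code, but a minimal sketch of that pattern (the helper name is hypothetical): small, bounded temp buffers go on the stack via stackalloc, so the hot path allocates nothing on the GC heap.

    using System;

    static class HexSketch
    {
        public static string ToHex(ReadOnlySpan<byte> bytes)
        {
            // Stack buffer for small inputs; heap fallback keeps large inputs safe.
            Span<char> chars = bytes.Length <= 32
                ? stackalloc char[64]
                : new char[bytes.Length * 2];

            for (int i = 0; i < bytes.Length; i++)
                bytes[i].TryFormat(chars.Slice(i * 2), out _, "x2");

            return new string(chars.Slice(0, bytes.Length * 2));
        }

        static void Main() =>
            Console.WriteLine(ToHex(stackalloc byte[] { 0xCA, 0xFE })); // prints "cafe"
    }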

Mono still benefits from some of these BCL improvements (more of the BCL is shared than not these days, and Blazor WASM for the moment is still more Mono than CoreCLR, so some investment has continued), but not all of them, and not always in the same ways.


> The C# standard library (often called the Base Class Library or BCL) has seen a ton of Span<T>/Memory<T>/stackalloc adoption internally in .NET 6+, with each release adding more. Things like file IO and serialization/deserialization in particular see notable performance improvements just from upgrading each .NET version: .NET 10 is faster than .NET 9 with largely the same code, and so forth.

I worded my reply poorly; what I meant was, 'If Oberon has its own stdlib, is it doing the modern performant practice?'


Sunk cost fallacy will be a big factor. They have already invested a lot of money and time into customizing Mono and into hacks like Burst and IL2CPP, so there's momentum to "stay the course" and keep investing in those, even if some evidence suggests that it is the wrong course.

It's still not a JS-level/JS-compatible GC (yet?), and it is still quite low level (more about corralling buffers of bytes than objects; a bit closer to OS-level page management than to a JS- or C#-level GC), as it is intended to be lower level than most languages need so that different languages can build different things on top of it. It is also a small stepping stone toward better memory sharing with JS APIs (and the eventual goal of WASM "direct DOM"), but that front isn't quite finished either, as more steps remain.

In theory, yes: IL2CPP doesn't need to exist given modern .NET AOT support. In practice, per the quotes in the article, Unity may have a bit of a sunk cost issue and has no plans to support .NET AOT, only IL2CPP.

Some of that sunk cost may be the pointer issue mentioned above and a lack of current plans for a smarter FFI interface between C++ and C#.


"Vote with your wallet" implies that the rich deserve more votes. Individual action in dollars per vote simply can't matter against the rivers of wealth in ad spend and investors. It's not just a flawed strategy, but sometimes believing in "vote with your wallet" signifies consent or at least complicity that the advertiser buying a lot of ads or the rich idiot with a lot of money invested in gaining your private data "should" win.

We need far better strategies than "vote with your wallet". I think it is at least time to remove "vote with your wallet" from our collective vocabulary, for the sake of actual democracy.


Sayings like "Vote with your wallet" come about as a byproduct of living in an economic system that is on its face democratic and capitalist yet somehow still concentrates political and market power in the hands of a few.

If something is bad, it's said that the free market will offer an alternative and the assumed loss of market share will rein in the bad thing. This ignores, as does most un-nuanced discourse about economy and society, that capitalism does not equate to a free market outside of a beginner's economics textbook, and that democracy doesn't prevent incumbents from buying up the competition (FB/Instagram) or attempting to block competition outright (TikTok).


It might not even be an actual account. TikTok does the LinkedIn and Facebook "growth hack" thing of pushing users to let it slurp up the entire address book on their phones. One of the reasons it "requires" a phone number is to do address book graphing. TikTok will send the email addresses it collects "Hey, your friend X is on TikTok" messages to try to drive more signups. All it takes is one friend/acquaintance tapping yes on "Allow Access to Contacts" and your email address is considered fair game to spam "on behalf of" your friends.

Social media was a mistake.

