> Biggest drawback though is that it's over-optimized for matrix math ...
I think this is what inspired the creation of Julia -- they wanted a Matlab clone where for loops were fast because some problems don't fit the matrix mindset.
(Snarky way of saying: GitHub still has huge mindshare and networking effects, dealing with another forge is probably too much friction for a lot of projects)
When GitHub was bought by Microsoft, Gitlab made moving your repos to them super easy. Apparently not enough people moved; even under sustained attacks from all kinds of vectors, people seem content to stick with GitHub.
I use both Gitlab and Github and have yet to experience any downtime on my own stuff. I do, however, work at a large corporation, and the latest NPM bug that hit Github caused enough of a stir that it basically shut down development in all of our lower environments for about two weeks, so there's that.
But I do agree, and GitHub's market share seems to have increased after the Microsoft acquisition, which is contrary to what I heard in all my dev circles, given how uncool MSFT is to many of my friends.
If escaping downtime is your goal, then you should aim for a service with less downtime than Github. (they're roughly the same, with Gitlab having a slightly higher percentage of "major" outages)
GitHub - Historically, GitHub reports uptime around 99.95% or higher, which translates to roughly 20–25 minutes of downtime per month. They have a large infrastructure and redundancy, so outages are rare but can happen during major incidents.
GitLab - GitLab also targets 99.95% uptime for its SaaS offering (GitLab.com). However, GitLab has had slightly more frequent service disruptions compared to GitHub in the past, especially during scaling events or major upgrades. For self-hosted GitLab instances, uptime depends heavily on your own infrastructure.
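For reference, converting an uptime percentage into monthly downtime is a one-liner (assuming a 30-day month):

```python
uptime = 0.9995                       # the 99.95% figure both services target
minutes_per_month = 30 * 24 * 60      # 43,200 minutes in a 30-day month
downtime_minutes = (1 - uptime) * minutes_per_month
print(round(downtime_minutes, 1))     # → 21.6
```

which is where the "roughly 20–25 minutes per month" figure comes from.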
They did an n-body simulation based on the known Keplerian orbital elements. That's exactly what you're asking for, right?
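For anyone unfamiliar with the term, here is a minimal sketch of what a direct-sum n-body integration looks like in Python (leapfrog scheme; the two-body circular-orbit setup and every number in it are illustrative, not taken from the paper):

```python
import numpy as np

def nbody_leapfrog(pos, vel, mass, dt, steps, G=1.0):
    """Direct-sum N-body integration using the kick-drift-kick leapfrog scheme."""
    pos, vel = pos.copy(), vel.copy()

    def accel(p):
        # Sum pairwise gravitational accelerations over all bodies.
        a = np.zeros_like(p)
        for i in range(len(mass)):
            for j in range(len(mass)):
                if i != j:
                    d = p[j] - p[i]
                    a[i] += G * mass[j] * d / np.linalg.norm(d) ** 3
        return a

    a = accel(pos)
    for _ in range(steps):
        vel += 0.5 * dt * a   # half kick
        pos += dt * vel       # drift
        a = accel(pos)
        vel += 0.5 * dt * a   # half kick
    return pos, vel

# Illustrative setup: a central body plus a nearly massless test particle
# on a circular orbit (G = M = r = 1, so the orbital period is 2*pi).
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.array([[0.0, 0.0], [0.0, 1.0]])
mass = np.array([1.0, 1e-12])
dt = 0.001
pos_f, vel_f = nbody_leapfrog(pos, vel, mass, dt, steps=int(2 * np.pi / dt))
radius = np.linalg.norm(pos_f[1] - pos_f[0])   # stays close to 1 over one period
```

A real simulation would start each satellite from its Keplerian orbital elements and add perturbations (drag, J2, etc.), but the integration loop is the same idea.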
Also, the formalism is the standard way astrophysicists understand collisions in gases or galaxies, and it works surprisingly well, especially when there are large numbers of "particles". There may be a few assumptions about the velocity distribution, but usually those are mild and only affect the results by less than an order of magnitude.
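The back-of-the-envelope version of that formalism is the classic n·σ·v collision-rate estimate. A sketch in Python, where the shell geometry, object count, sizes, and relative velocity are all hypothetical numbers chosen purely for illustration:

```python
import math

# Hypothetical inputs, for illustration only.
n_objects = 5000            # objects sharing one altitude shell
shell_radius_km = 6900.0    # geocentric radius (~500 km altitude)
shell_thickness_km = 100.0
object_radius_km = 0.005    # ~5 m effective radius per object
v_rel_km_s = 10.0           # typical LEO relative speed

# Number density n in the spherical shell.
volume_km3 = 4 * math.pi * shell_radius_km**2 * shell_thickness_km
n = n_objects / volume_km3

# Collision cross-section sigma = pi * (r1 + r2)^2 for two equal-size objects.
sigma_km2 = math.pi * (2 * object_radius_km) ** 2

# Per-object collision rate and timescale: rate = n * sigma * v_rel.
rate_per_s = n * sigma_km2 * v_rel_km_s
timescale_years = 1 / rate_per_s / (3600 * 24 * 365.25)
```

The point isn't the specific number, but that the estimate only needs a density, a cross-section, and a relative velocity, which is why it works for gases and galaxies alike.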
"N-body simulation" doesn't mean what it's normally taken to mean here.
And the colliding-gases models carry the huge assumption of random/thermal motion. These satellites are in carefully designed orbits; they aren't going to magically thermalize if left unmonitored for three days.
That's why I mentioned the assumption about the velocity distribution. Sure, the velocities aren't Maxwell-Boltzmann, but that doesn't matter much for getting a sense of the scale of the issue. The way an astrophysicist thinks (I am one) is that if we make generous assumptions and it turns out not to be a problem, then it definitely isn't a problem. Here they determined it might be a problem, so further study is warranted. It's also a scientist strategy to publish something slightly wrong to encourage more citations.
Well, sure, they won't be thermally random, but they will be significantly perturbed from their nominal orbits, particularly at the lower orbital altitudes.
Solar flares cause atmospheric upwelling, so drag dramatically increases during a major solar flare. And the scenario envisioned in the paper is basically a Carrington-level event, so this effect would be extreme.
The current "carefully designed orbits" have a Starlink sat doing a collision avoidance maneuver every 1.8 minutes on average, according to their filing covering December 1 to May 31 of this year.
It also notes that the collision-odds threshold at which SpaceX triggers such maneuvers is 333 times more conservative than the industry standard. Were that not the case (and they were just using the standard criterion), one might naively assume they would only be doing a maneuver every ten hours or so. But collision probabilities are not linear; they follow a power-law distribution, so in actuality they would only be doing such maneuvers every few days.
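The scaling argument can be sketched in a few lines; note that the tail exponent here is a made-up illustrative value, not something from the filing:

```python
# Start from the fleet-wide maneuver spacing stated in the filing.
base_interval_min = 1.8
threshold_ratio = 333     # how much more conservative SpaceX's trigger is

# Naive linear scaling: a 333x looser trigger means 333x fewer maneuvers.
naive_interval_hours = base_interval_min * threshold_ratio / 60   # ~10 hours

# If the number of conjunctions exceeding probability p falls off as
# p**(-alpha), the interval instead scales as threshold_ratio**alpha.
alpha = 1.3   # hypothetical tail exponent, chosen purely for illustration
powerlaw_interval_days = base_interval_min * threshold_ratio**alpha / (60 * 24)
```

With any exponent steeper than linear, the industry-standard trigger would fire far less often than the naive estimate suggests, which is the "every few days" point.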
It is disingenuous to the point of dishonesty to use SpaceX's abundance of caution (or possibly braggadocious operational flex) as evidence that the risk is greater than it actually is.
My favorite non-mainstream language for competitions like this and Project Euler is Julia. The startup time is not a factor, and the ability to use UTF-8 symbols as variables makes the code more mathematical.