Hacker News | queuebert's comments

> Biggest drawback though is that it's over-optimized for matrix math ...

I think this is what inspired the creation of Julia -- they wanted a Matlab clone where for loops were fast because some problems don't fit the matrix mindset.


Genuine question: What else would it do? The mouse trail is a history of coordinates, so that should be linear, right?


At this point, is there any downside to switching to GitLab?


What’s gitlab?

(Snarky way of saying: GitHub still has huge mindshare and network effects; dealing with another forge is probably too much friction for a lot of projects.)

Not that GitHub doesn’t suck…


When GitHub was bought by Microsoft, GitLab made moving your repos over super easy. Apparently not enough people moved, and even with sustained attacks from all kinds of different vectors, people seem to be sticking with GitHub.

I use both GitLab and GitHub and have yet to experience any downtime on any of my stuff. I do, however, work at a large corporation, and the latest NPM bug that hit GitHub caused enough of a stir that it basically shut down development in all of our lower environments for about two weeks, so there's that.

But I do agree, and it seems like their market share increased after the Microsoft acquisition, which is contrary to what I heard in all my dev circles, given how uncool MSFT is to many of my friends.


Is it any better?

We had that last year, with the full premium stuff ("pay as much as we can" mindset)

Please see this: a basic feature, much needed by lots of people (those who are stuck on Azure...): https://gitlab.com/gitlab-org/gitlab/-/issues/360592

Please read the entire thread, with particular attention to the timeline.


If escaping downtime is your goal, then you should aim for a service with less downtime than GitHub. (They're roughly the same, with GitLab having a slightly higher percentage of "major" outages.)


Is the uptime any better?


Not really:

GitHub - Historically, GitHub reports uptime around 99.95% or higher, which translates to roughly 20–25 minutes of downtime per month. They have a large infrastructure and redundancy, so outages are rare but can happen during major incidents.

GitLab - GitLab also targets 99.95% uptime for its SaaS offering (GitLab.com). However, GitLab has had slightly more frequent service disruptions compared to GitHub in the past, especially during scaling events or major upgrades. For self-hosted GitLab instances, uptime depends heavily on your own infrastructure.


Base 8 numbering system?


I'm glad to see that a government elected by rural, blue-collar workers is tackling the issues those workers care most about.

/s


In Rust, could you instead use a generic struct wrapping a function pointer, along with #[repr(C)]?
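A minimal sketch of what the question seems to describe (my own construction, not from the thread): a generic #[repr(C)] struct wrapping an extern "C" function pointer, giving a C-compatible layout once the type parameter is monomorphized. The names `CCallback` and `double` are illustrative, not from any real API.

```rust
// Hypothetical example: a generic struct with C-compatible layout
// holding an extern "C" function pointer.
#[repr(C)]
pub struct CCallback<T> {
    pub func: extern "C" fn(T) -> T,
}

extern "C" fn double(x: i32) -> i32 {
    x * 2
}

fn main() {
    let cb = CCallback { func: double };
    // Call through the wrapped function pointer.
    println!("{}", (cb.func)(21)); // prints 42
}
```

Note that #[repr(C)] on a generic struct only fixes the layout per concrete instantiation; each monomorphized `CCallback<T>` gets its own C-style layout.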


They did an n-body simulation based on the known Keplerian orbital elements. That's exactly what you're asking for, right?

Also, the formalism is the standard way astrophysicists understand collisions in gases or galaxies, and it works surprisingly well, especially when there are large numbers of "particles". There may be a few assumptions about the velocity distribution, but usually those are mild and only affect the results by less than an order of magnitude.
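For a sense of where the velocity-distribution assumption enters (my own addition, standard kinetic theory rather than anything from the paper): the collision rate per object in that formalism is roughly

```latex
\Gamma = n \,\sigma\, v_{\mathrm{rel}}, \qquad \lambda = \frac{1}{n\sigma}
```

where $n$ is the number density of objects, $\sigma$ the collision cross-section, $v_{\mathrm{rel}}$ a typical relative speed, and $\lambda$ the mean free path. The velocity distribution only enters through $v_{\mathrm{rel}}$, which is why mild assumptions about it tend to shift the rate by less than an order of magnitude.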


"N-body simulation" doesn't mean what it's normally taken to mean here.

And the colliding-gases models have the huge assumption of random/thermal motion. These satellites are in carefully designed orbits; they aren't going to magically thermalize if left unmonitored for three days.


That's why I mentioned the assumption about the velocity distribution. Sure, the velocities aren't Maxwell-Boltzmann, but that doesn't matter too much for getting a sense of the scale of the issue. The way an astrophysicist thinks (I am one) is that if we make generous assumptions and it turns out to not be a problem, then it definitely isn't a problem. Here they have determined it might be a problem, so further study is warranted. It's also a scientist strategy to publish something slightly wrong to encourage more citations.


Well, sure, they won't be thermally random, but they will be significantly perturbed from their nominal orbits, particularly at the lower orbital altitudes.

Solar flares cause atmospheric upwelling, so drag dramatically increases during a major solar flare. And the scenario envisioned in the paper is basically a Carrington-level event, so this effect would be extreme.


The current "carefully designed orbits" have a Starlink sat doing a collision-avoidance maneuver every 1.8 minutes on average, according to their filing covering December 1 to May 31 of this year.


Interestingly, the report from which they draw that number is one of the few that they cite but do not link to. Here's a link:

https://www.scribd.com/document/883045105/SpaceX-Gen1-Gen2-S...

It also notes that the collision-probability threshold at which SpaceX triggers such maneuvers is 333 times more conservative than the industry standard. Were that not the case (and they were just using the standard criterion), one might naively assume that they would only be doing a maneuver every ten hours or so. But collision probabilities are not linear; they follow a power-law distribution, so in actuality they would only be doing such maneuvers every few days.
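The "every ten hours" figure is just linear scaling of the numbers above; a quick back-of-envelope check (all input values taken from the thread, not independently verified):

```rust
// Naive linear scaling of maneuver frequency by the threshold factor.
// Inputs: one maneuver every 1.8 minutes, threshold 333x more conservative
// than the industry standard (both figures from the comment above).
fn main() {
    let interval_min = 1.8_f64;
    let naive_standard_interval_hours = interval_min * 333.0 / 60.0;
    // Linear scaling predicts roughly one maneuver every ~10 hours
    // under the standard criterion; the power-law point above is that
    // the true interval would be longer still, on the order of days.
    println!("{:.1}", naive_standard_interval_hours); // prints 10.0
}
```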

It is disingenuous to the point of dishonesty to use SpaceX's abundance of caution (or possibly braggadocious operational flex) as evidence that the risk is greater than it actually is.


BQN or its ancestor APL are good for this.


But what about all those LeetCode interviews? /s


My favorite non-mainstream language for competitions like this and Project Euler is Julia. The startup time is not a factor, and the ability to use Unicode symbols as variable names makes the code more mathematical.

