
Currently only GHA and GitLab are supported.

It also makes it impossible to publish using CI, which is problematic for projects with frequent releases. And trusted publishing doesn't solve that if you use self-hosted CI.

> trusted publishing doesn't solve that if you use self-hosted CI

Is there any particular reason for the whitelist approach? Standing on the sidelines, it appears wholly unnecessary to me. Authenticating that an artifact came from a given CI system seems orthogonal to the question of how much trust you place in that CI system.


Well, given that GitHub owns NPM, one potential reason could be vendor lock-in.

Also, from an implementation standpoint it is probably easier to make a system that just works for a handful of OIDC providers than a more general solution. In particular, a general solution would require a UI, and maybe an API, for registering NPM as a service provider with an identity provider of the package owner's choice.


Is OIDC federation really so involved as to require a nontrivial registration step? Shouldn't this be an OAuth style flow where you initiate with the third party and then confirm the requested permissions with the service? Where did it all go wrong?

Each OIDC provider has its own claim formats, which Trusted Publishing needs to be aware of to accurately determine which set makes up a sufficient "identity" for publishing purposes. That's not easily generalizable across providers, at least not until someone puts the sweat and tears into writing some kind of standard claim profile for OIDC IdPs that provide CI/CD machine identities.
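
To make that concrete, here is a rough sketch (not npm's actual code; the claim names are approximations of what GitHub Actions and GitLab CI put in their OIDC ID tokens, and signature/issuer verification is omitted entirely) of why each provider needs its own mapping from claims to a publishing identity:

    # Illustrative only: claim names differ per CI provider.
    GITHUB_ACTIONS_CLAIMS = {
        "iss": "https://token.actions.githubusercontent.com",
        "sub": "repo:octo-org/octo-repo:ref:refs/heads/main",
        "repository": "octo-org/octo-repo",
        "repository_owner": "octo-org",
        "ref": "refs/heads/main",
        "workflow": "publish.yml",
    }

    GITLAB_CI_CLAIMS = {
        "iss": "https://gitlab.com",
        "sub": "project_path:my-group/my-project:ref_type:branch:ref:main",
        "project_path": "my-group/my-project",
        "namespace_path": "my-group",
        "ref": "main",
        "ref_type": "branch",
    }

    def publisher_identity(claims: dict) -> tuple[str, str]:
        """Reduce provider-specific claims to (project, ref).
        Every new issuer needs its own branch here, which is why
        supporting arbitrary IdPs isn't free for the registry."""
        issuer = claims["iss"]
        if issuer == "https://token.actions.githubusercontent.com":
            return claims["repository"], claims["ref"]
        if issuer == "https://gitlab.com":
            return claims["project_path"], claims["ref"]
        raise ValueError(f"unsupported issuer: {issuer}")

    print(publisher_identity(GITHUB_ACTIONS_CLAIMS))  # ('octo-org/octo-repo', 'refs/heads/main')
    print(publisher_identity(GITLAB_CI_CLAIMS))       # ('my-group/my-project', 'main')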

(This is also only half the problem: the Relying Party also needs to be confident that the IdP they're relying on is actually competent, i.e. can be trusted to maintain a private key, operationalize its rotation, etc. That's not something that can easily be automated.)


There needs to be a way for the user to tell NPM which IdP to trust. And then, not all IdPs support automated registration.

JavaScript is a standard with many implementations. Any addition to the "standard library" (such as it is) has to go through a long process to get approved by a committee, then in turn implemented by at least the major implementations (V8, SpiderMonkey, JavaScriptCore).

> Or at least first party libraries maintained by the JavaScript team?

There is no "JavaScript team".


It depends on the state, but that is largely already the case. At least in my state, you get a significant deduction on your property taxes if it is your primary residence.

> Ah, looks like it was somewhat superseded by RFC 4918, but we’re not going to tell you which parts! How about those extension RFCs? There’s only 7 of them…

This is a major complaint I have with RFCs.

If you want to know the current standard for a protocol or format you often have to look at multiple RFCs. Some of them partially replace parts of a previous RFC, but it isn't entirely clear which parts. And the old RFCs don't link to the new ones.

There are no fewer than 11 RFCs for HTTP (including versions 2 and 3).

I really wish the IETF published living standards that combine all the relevant RFCs into a single source of truth.


Is this true anymore? AFAIK I've seen "Updated by" (rfc2119) and "Obsoleted by" (rfc3501), but that might have changed afterwards: https://stackoverflow.com/a/39714048

Those notices don't usually point to all the RFCs that update the one you are reading. They tend to be more complete in the case of obsoleted ones.


I also suspect those responses were all generated by an AI.

I don't know much about stenography, but that seems like something that would be trivial for a computer to do.

And certainly something that would only have to be done once...


The human should be left in the loop, full stop. Language is hard, courtrooms can be chaotic places, and you want the transcript to be clean.

(disability accommodations are a different conversation altogether)


> Abstractions don’t remove complexity. They move it to the day you’re on call.

As someone who has been on call a lot, this is only true for bad or incomplete abstractions.

When you are on call (or developing) you can't possibly know everything about the system. You need abstractions to make sense of what is going on and how the system as a whole works, and to know which parts to home in on when things go wrong.

And it is extremely useful to have standard ways of changing configuration for things like timeouts, buffer sizes, etc. in a central place.
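
Something like this, to give a feel for what I mean (purely a hypothetical sketch; the names and the env-var override scheme are mine, not any particular framework's API):

    import os
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ServiceTuning:
        # One central, discoverable place for the knobs on-call cares about.
        request_timeout_s: float = 5.0
        connect_timeout_s: float = 1.0
        read_buffer_bytes: int = 64 * 1024

        @classmethod
        def from_env(cls, prefix: str = "SVC_") -> "ServiceTuning":
            """Allow any knob to be overridden via environment variables
            (e.g. SVC_REQUEST_TIMEOUT_S=30) without hunting through the code."""
            def pick(name, cast, default):
                raw = os.environ.get(prefix + name.upper())
                return cast(raw) if raw is not None else default

            return cls(
                request_timeout_s=pick("request_timeout_s", float, cls.request_timeout_s),
                connect_timeout_s=pick("connect_timeout_s", float, cls.connect_timeout_s),
                read_buffer_bytes=pick("read_buffer_bytes", int, cls.read_buffer_bytes),
            )

    TUNING = ServiceTuning.from_env()
    print(TUNING)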


I don't think it's meant to be a point against abstraction or a point against complexity. I think it's widely understood that abstraction is part of how advancement is made in our practice, as well as in other disciplines. I have taken this saying to be an observation that there is almost always possible failure beneath the façade provided by the abstraction. Therefore, yes, you avoid having to let that complexity enter your brain, but only when the abstraction is holding. Beyond that point, often after pages are sent, you will still have to engage with the underlying complexity. A proactive measure following from this idea would be to provide support in or alongside your abstractions for situations where one must look under the bonnet.

> Outside of the domain of Firefox/Chromium, screencasting is much seamless

Not always. In my experience, Zoom screencasting on Wayland is much, much worse than in browsers. But that isn't terribly surprising, given how generally bad Zoom's UX is on Linux.


Technically, X is also just a protocol. But there was just one main implementation of the server (X.org), and just a couple implementations of the client library (xlib and xcb).

There isn't any technical reason we couldn't have a single standardized library, at the abstraction level of wlroots.

