
Can we finally declare this (and other incomplete language-specific package managers) to be a failed experiment and go back to a robust and secure distro-based package management workflow, with maintainers separate from upstream developers?

It's a false belief that distro-based package management workflows are, or ever were, more secure. It's the same problem, maybe one step removed. Look at all the exploits with things like xz.

There was also the Python 2.7 problem for a long time: thanks to this model it couldn't be updated quickly, and developers, including the OS developers, became dependent on it being there by default and built things around it.

Then when it EOL'd, it left a lot of people exposed to vulnerabilities and was quite the mess to update.


The robust and secure distro-based package management workflow that shipped the xz backdoor to everyone, broke OpenSSH key generation, and broke most of the functionality of keepassxc?

> workflow that shipped the xz backdoor to everyone

Isn't it the case that it didn't ship the backdoor? Precisely because of the thorough testing and vetting process?


No, it shipped in Debian Sid, OpenSUSE Tumbleweed and Fedora Rawhide, along with beta versions of Ubuntu 24.04 and Fedora 40. Arch also shipped it but the code looked for rpm/apt distros so the payload didn’t trigger.

It was caught by a Postgres developer who noticed strange performance on their Debian Sid system, not by anyone involved with the distro packaging process.


In other words, it didn't hit anyone running stable distros, only users on beta versions or rolling releases.

Sounds like an improvement - having beta builds where people can catch those before they arrive in a stable GNU distribution seems like the ideal workflow at first glance.


On top of that, the number of such issues is tiny compared to language-specific package ecosystems.

Distro packaging is not perfect, but it is much, much better.


App devs are part of the distro release process. They verify stability with other packages.

It's an OS, it's a collaborative endeavour.


where do you get all these trusted people to review your dependencies from?

it can't be just anyone, because you're essentially delegating trust.

no way there are enough trustworthy volunteers (and how do you vet them all?)

and who's going to pay them if they're not volunteers?


Language-specific package managers are a natural outgrowth of wanting portability across platforms.

When distros figure out how I can test my software with a dep at version A and the same dep at version B in a straightforward way, then we can talk.
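With the language tooling that's a couple of lines of throwaway scripting. A rough sketch of what I mean - the dependency name "somedep", the pytest suite, and the POSIX venv paths are all assumptions for illustration:

    import subprocess
    import venv

    # One fresh virtualenv per pinned version of the (hypothetical) dependency.
    for version in ("1.0", "2.0"):
        env_dir = f".venv-somedep-{version}"
        venv.create(env_dir, with_pip=True)
        pip = f"{env_dir}/bin/pip"
        python = f"{env_dir}/bin/python"
        subprocess.run([pip, "install", f"somedep=={version}", "pytest"], check=True)
        subprocess.run([python, "-m", "pytest"], check=True)  # run the suite against this pin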

NPM forcing a human to click a button on release would have solved a lot of this stuff. So would have many other mitigations.


I run them inside a sandbox.

The npm ecosystem is so big that one can never discard it for frontend development.


Never in a million years.

Rust's Cargo is sublime. System apt / yum / pacman / brew could never replace it.

Cargo handles so much responsibility outside of system packages that they couldn't even come close to replicating the utility.

Checking language versions and editions, compiling macros and sources, cross-compiling for foreign architectures, linking, handling upgrades, transitive dependency versioning, handling conflicts, feature gating, optional compilation, custom linting and strictness, installing sidecars and CLI utilities, etc. etc.

Once it's hermetic and namespaced, cargo will be better than apt / yum / etc. They're not really performing the same tasks, but cargo is just so damned good at such important things that it's hard to imagine a better tool.


It's got all the same issues as npm though. The fact that it's so cool makes it a magnet for adding deps. Rust's own docs generator pulls in > 700 deps.

A "hoarder" but less than 1000 tabs combined - rookie numbers!

This one stood out to me even more:

> I'm used to have 495 tabs open on my iPhone

iOS Safari lets you have 500 tabs max in a "tab group", including the default tab group which is the one that shows "N Tabs" when you open the browser.

I tend to hit this limit every few months and end up saving everything that's currently open into a new tab group with a name like "Old Tabs January 2026".


BTW, if you want to design some models for 3D printing but the only thing you know how to do is code, you can use OpenSCAD & program the objects into existence:

https://openscad.org/

Also recommend using the BOSL2 library with OpenSCAD - it turns an already very powerful tool into something insane:

https://github.com/BelfrySCAD/BOSL2


Hey, this is super interesting! Thanks for sharing. I have been playing with using the Python console/scripts/macros in FreeCAD to create 3D models. I found this to be very friendly for my programmer mindset. I have learned a bit of Onshape, Tinkercad, Blender and FreeCAD, but I find them extremely tedious and full of unknowns that I struggle to make sense of and resolve (e.g. constraints in FreeCAD - sometimes I just don't know how to add the missing constraints - or just adding text to a curved face in literally all programs; it's never as easy as clicking the face and adding text, there are always gotchas).

I wonder how OpenSCAD compares to FreeCAD's Python, if you know. I just found https://pythonscad.org/ which looks interesting, but then, the BOSL2 library looks super interesting and important for a good user experience, so I do not know if PythonSCAD could somehow just import it and use it.

I guess there's homework for me to do here, but if anyone has the experience to give a hint on "what is the best/easiest Python-based way of doing 3D modeling programmatically", I'd be forever thankful if they shared their thoughts.

LLMs are really good at writing Python, so I found that iterating on a model in code is really quick, and I really enjoy the process. Meanwhile, clicking so many times through so many menus makes me give up on designing anything more-or-less complex.
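For reference, the kind of FreeCAD console snippet I mean is roughly this (a minimal sketch meant to be run inside FreeCAD's Python console; the dimensions and file name are made up):

    import FreeCAD as App
    import Part

    doc = App.newDocument("demo")
    plate = Part.makeBox(40, 20, 5)                         # 40 x 20 x 5 mm plate
    hole = Part.makeCylinder(3, 5, App.Vector(10, 10, 0))   # 3 mm radius through-hole
    result = plate.cut(hole)                                 # boolean difference
    Part.show(result)                                        # add the shape to the document
    doc.recompute()
    result.exportStl("plate_with_hole.stl")                  # hand off to the slicer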


Just got a 3D printer and was curious what the best practice was for generating objects in code and then outputting to a printer.

Thanks for sharing!


Another, arguably even more powerful, alternative is Rhino + Grasshopper. Grasshopper is often used for generative designs, but can include arbitrary Python nodes and can even be used for "parametrically" designed functional parts.

Grasshopper can also output gcode directly [1], enabling pretty wild things like [2].

[1]: https://interactivetextbooks.tudelft.nl/rhino-grasshopper/Gr...

[2]: https://www.instagram.com/medium_things/
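The gcode output is less magic than it sounds: inside a Grasshopper Python node (or any plain Python script) it mostly comes down to emitting text lines. A very rough sketch - the radius, layer height, feed rate, and extrusion increment below are made-up values, not a working printer profile:

    import math

    lines = ["G28", "G21", "G90"]  # home, millimetres, absolute positioning
    radius, layer_height, feed = 20.0, 0.2, 1200
    e = 0.0
    for step in range(2000):
        angle = step * 0.05
        x = 100 + radius * math.cos(angle)
        y = 100 + radius * math.sin(angle)
        z = layer_height * angle / (2 * math.pi)  # continuous "vase mode" spiral
        e += 0.02                                 # crude extrusion increment
        lines.append(f"G1 X{x:.2f} Y{y:.2f} Z{z:.3f} E{e:.3f} F{feed}")

    with open("spiral.gcode", "w") as f:
        f.write("\n".join(lines))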


This is really cool, I had no idea this existed. Thanks for sharing!

640 cores should be enough for anyone

Tell that to Nvidia, Blackwell is already up to 752 cores (each with 32-lane SIMD).

640K cores should be enough for everyone.

B200 is 148 SMs, so no.

Each SM cluster contains 4 independent 32-wide compute units, and GB202 has 192 SMs, although only 188 of them are enabled on the largest shipping SKU. IMO that makes for 752 "cores", but depending on where you draw the line it could be 188, 752, or 24064.
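Spelled out, the three counts are the same figures from above multiplied at different granularities:

    enabled_sms = 188      # SMs enabled on the largest shipping GB202 SKU
    units_per_sm = 4       # independent 32-wide compute units per SM
    simd_width = 32

    print(enabled_sms)                              # 188    if a "core" is an SM
    print(enabled_sms * units_per_sm)               # 752    if a "core" is a compute unit
    print(enabled_sms * units_per_sm * simd_width)  # 24064  if a "core" is a SIMD lane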

SM is the Nvidia definition of a processor, and CUDA device properties return it, not anything else. If you want a marketing number, use CUDA cores; that doesn't consistently match anything in the hardware design.

No, you really can't.

Nvidia's use of "cores" is simply wrong, unless you think a core is a simple scalar ALU. But cores haven't been like that for decades.

Or would you like to count cores in a current AMD or Intel CPU? Each "core" has half a dozen ALUs/FP pipes, and don't forget to multiply by SIMD width.


Does anyone still think Android has any future under Google's stewardship?

Actually I think Lukashenko is only playing dumb & wants to wait it out, expecting Belarus to be left standing in quite a good position once Russia goes down the failed-state route.

I was surprised to be honest. Belarus is always portrayed as a Russian puppet state, but it seems the puppet master had less power than anticipated.

In 2025+ you also need to count in deniable, accident-inducing naval drones - both surface and underwater types. So their position would be even more untenable.

There are the biggest protests in Iran in years & they recently lost a war with Israel - I don't see them being a problem in the long term, & with a bit of luck their horrendous regime that regularly slaughters its own citizens might be gone.

Still I don't see an issue - basically you either pay the armed coast guard cutter that stands in your way or you don't go through the strait. If you don't cause any trouble, the cutter at the other end will pay you back. No money, no transit - unless you really like being boarded.

Regardless of what specific rules could be set you have to consider rules of engagement and potential escalation. What happens if a Russian merchant vessel (either legitimately flagged or shadow fleet) refuses to cooperate? Do you use force to stop them? What if they're being escorted by a Russian warship or combat aircraft?

You put mines down and wait. We need to stop with the "what if escalation" mantra when it was always the Russians escalating.

Put mines where? How do you prevent neutral vessels from hitting them? What happens when they inevitably break loose in a storm and drift away? Naval mines are quite effective for closing down a body of water in an unrestricted hot war but we haven't reached that stage yet. EU and NATO countries still want to be able to use the Baltic Sea and Gulf of Finland for their own purposes.

You haven't really thought this plan through.


Smart mines. There are thousands of them deployed already.

Huh? What are you even talking about? Which models specifically? And how sure are you that they can reliably discriminate between Russian vessels versus others that look and sound identical?

You haven't really thought this plan through.


> And how sure are you that they can reliably discriminate between Russian vessels versus others that look and sound identical?

Based on recent events, even people struggle to tell what is Russian and what isn’t.

These smart mines might solve that?

https://www.nytimes.com/2025/12/31/us/politics/russia-oil-ta...


No, smart mines won't solve that. I can't fathom where this fantasy is coming from. It's totally disconnected from the reality of current mine technology.


What's your point? Those weapons don't have the ability to reliably distinguish Russian shadow fleet vessels from others.

There has been some research (IIRC by ESA) on using the upper atmosphere to feed an ion engine. That way you should be able to put satellites even lower, as long as they have enough power from their solar panels and remain functional.
