Hacker News | new | past | comments | ask | show | jobs | submit | cesarb's comments

> The Cargo example at the top is striking. Whenever I publish a crate, and it blocks me until I write `--allow-dirty`, I am reminded that there is a conflation between Cargo/crates.io and Git that should not exist. I will write `--allow-dirty` because I think these are two separate functionalities that should not be coupled.

That's completely unrelated.

The `--allow-dirty` flag bypasses a local safety check that prevents you from accidentally publishing a crate with changes that haven't been committed to your local git repository. It has no relation at all to the use of git for the index of packages.
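The check is essentially "is `git status` clean?". A rough sketch of that logic in shell (my reconstruction for illustration, not cargo's actual code):

```shell
# Hedged sketch of cargo's pre-publish dirty-tree check (not cargo's code):
# refuse to publish when `git status --porcelain` reports anything,
# including untracked files.
publish_check() {
  if [ -n "$(git status --porcelain)" ]; then
    echo "dirty: would abort without --allow-dirty"
  else
    echo "clean: publish proceeds"
  fi
}

repo=$(mktemp -d) && cd "$repo" && git init -q
echo 'fn main() {}' > main.rs   # new, uncommitted file => dirty
publish_check                   # -> dirty: would abort without --allow-dirty
git add main.rs
git -c user.email=a@b -c user.name=a commit -qm init
publish_check                   # -> clean: publish proceeds
```

Note that even untracked files count as "dirty" here, which matches the behavior people usually hit with `cargo publish`.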

> Crates.io should not know about or care about my project's Git usage or lack thereof.

There are good reasons to know or care. The first is to provide a link from the crates.io page to your canonical version-control repository. The second is to add a file recording the original commit identifier (a commit hash, in git's case) used to generate the package, which simplifies auditing that the contents of the package match what's in the version-control repository (a defense against supply-chain attacks). Both are optional.


Those are great points, and they reinforce my point that Cargo and Git/commits are conflated. Commits and Cargo should, IMO, be separate concepts. Cargo should not be checking my Git history before publishing.

One of these is not like the others...

> The problem was that go get needed to fetch each dependency’s source code just to read its go.mod file and resolve transitive dependencies.

This article is mixing two separate issues. One is using git as the master database storing the index of packages and their versions. The other is fetching the code of each package through git. They are orthogonal: you can have a package index using git while the packages are zip/tar/etc archives; a package index not using git while each package is cloned from a git repository; both the index and the packages as git repositories; neither using git; or no package index at all (AFAIK that's the case for Go).


The author seems a little lost, tbh. It starts with "your users should not all clone your database", which I definitely agree with, but that doesn't mean you can't encode your data in a git graph.

It then digresses into implementation details of GitHub's backend (how are 20k forks relevant?), then complains about the default settings of the "standard" git implementation. You don't need to check out a git working tree to get efficient key-value lookups, and without a working tree you don't need to worry about filesystem directory limits, case sensitivity, or path-length limits.

I was surprised the author believes the git-equivalent of a database migration is a git history rewrite.

What do you want me to do, invent my own database? Run postgres on a $5 VPS and have everybody accept it as single-point-of-failure?


> Run postgres on a $5 VPS and have everybody accept it as single-point-of-failure

Oh how times have changed. Yes, maybe run two $5 VPSs behind a load balancer for HA so you can patch, and then put a CDN in front of it to serve the repository content globally to everyone. Sign the packages cryptographically so you can invite people in your community to become mirrors.

How do people think PyPI, RubyGems, CPAN, Maven Central, or distro package archives work?


I think the article takes issue not with fetching the code, but with fetching the go.mod file that contains index and dependency information. That’s why part of the solution was to host go.mod files separately.

Even with git, it should be possible to grab just the single file needed without the rest of the repo, but it's still trying to fit a square peg into a round hole.
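For what it's worth, modern git can do this with a blobless partial clone: only the blobs you actually read get fetched. A hedged sketch, using a local throwaway repo as a stand-in for the remote (the server side must enable `uploadpack.allowFilter`; if it doesn't, git falls back to a full clone, so the final command still works either way):

```shell
# Sketch: read a single file (go.mod) from a repo without fetching all blobs,
# via a blobless partial clone. A local throwaway repo stands in for the remote.
set -e
work=$(mktemp -d)
remote="$work/remote"
git init -q "$remote"
(
  cd "$remote"
  echo 'module example.com/demo' > go.mod
  mkdir -p pkg && echo 'package pkg' > pkg/pkg.go
  git add -A
  git -c user.email=a@b -c user.name=a commit -qm init
  git config uploadpack.allowFilter true   # let clients request --filter
)
# Bare, blobless clone: commits and trees come down, blobs stay on the server
git clone -q --bare --filter=blob:none "file://$remote" "$work/clone.git"
# Reading go.mod fetches just that one blob on demand
git -C "$work/clone.git" show HEAD:go.mod   # -> module example.com/demo
```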

Honestly I think the article is a bit ahistorical on this one. ‘go get’ pulls the source code into a local cache so it can build it, not just to fetch the go.mod file. If they were having slow CI builds because they didn’t or couldn’t maintain a filesystem cache, that’s annoying, but not really a fault in the design. Anyway, Go improved the design and added an easy way to do faster, local proxies. Not sure what the critique is here. The Go community hit a pain point and the Go team created an elegant solution for it.

I was thinking this too. I think it might be talking about operations like “go mod tidy” or update operations where it updates your go.mod/sum but doesn’t actually build the code. I would guess enterprise tools do a lot of checking whether there are updates without actually doing any building.

> Could dominant vs non-dominant hand for operating things on the center console make a difference?

Large airplanes usually have a pilot on either side of the center console, and AFAIK they take turns operating the airplane, so if it made a difference, I'd expect it to have been studied by the aerospace industry. Given that I've never seen it mentioned in any of the airplane incident reports I've read, it probably isn't a big factor, and I see no reason why it would be different for cars.


> Why would any actual software engineer be against slopware? When it inevitably all comes crashing down [...] someone will have to come in to make the actual product.

Why would a window maker be against breaking windows?


> will wait forever, pushed by the wayside by those who can deliver great quality with the help of these new tools.

That sounded a lot like the "have fun staying poor" argument from the peak cryptocurrency days.


Did it? Cryptocurrency enabled gambling and illicit purchases, that's it. In all other ways it was/is a solution in search of a problem.

Current gen AI has a ton of issues, but it nevertheless enables vast amounts of use cases today, right now.

And hoping that slop that is created today will provide work for the artisanal craftsman in the future is wishful thinking at best.


When you think about it, "vibe coding" kinda enables gambling, but with software.

You set up your prompt, CLAUDE.MD, include the source files and let it rip. It gets some things right, some things wrong. You fix some things manually, /clean and go again. Sometimes you gotta throw it out and start over. Feels like "most players stop just before striking it big".


No, not really.

> Any electricity produced by turning generators will require rare earths.

AFAIK, not all kinds of rotating generators require rare earths; IIRC, induction machines (motors or generators) don't need any permanent magnets.


AFAIK, modern wind turbines use a type of induction generator because it allows them to adjust the rotation speed by feeding a variable-frequency counter-rotating field into the machine (which is a very neat trick); older turbines had to rotate at a fixed divisor of 3600 rpm (the 60 Hz grid frequency).

It makes sense if you think of a prompt not as a way of telling the LLM what to do (like you would with a human), but instead as a way of steering its "autocomplete" output towards a different part of the parameter space. For instance, the presence of the word "mysql" should steer it towards outputs related to MySQL (as seen on its training data); it shouldn't matter much whether it's "mysql" or "MYSQL" or "MySQL", since all these alternatives should cluster together and therefore have a similar effect.

> We live in the future my friends

I second that. Hearing in the VASAviation video (linked by someone else in a nearby thread) the robotic voice announcing what it's doing, while it performs a completely autonomous landing at an airport it autonomously decided on, with no possibility of fallback to or help from a human pilot, is one of those moments when we feel like we're living in the future promised by the many sci-fi stories we read as children.


That's only if the distro is recent enough; sooner or later, you'll encounter a box running a distro version from before /etc/os-release became the standard, and you'll have to look for the older distro-specific files like /etc/debian_version.

> you'll encounter a box running a distro version from before /etc/os-release became the standard

Do those boxes really still exist? Debian, which isn't exactly known as the pinnacle of bleeding edge, has had /etc/os-release since Debian 7, released in May 2013. RHEL 7, the oldest Red Hat still in extended support, also has it.


> the oldest Red Hat still in extended support, also has it.

You would be alarmed to know how long the long tail is. Are you going to run into many pre-RHEL 7 boxes? No. Depending on where you are in the industry, are you likely to run into some ancient RHEL boxes, perhaps even actual Red Hat (not Enterprise) Linux? Yeah, it happens.


> Do those boxes really still exist?

Yes, they do. You'd be surprised by how many places run out-of-support operating systems and software (which were well within their support windows when installed; they have just never been upgraded). After all, if it's working, why change it? (We have a saying here in Brazil, "em time que está ganhando não se mexe", which can be loosely translated as "don't change a (soccer) team that's winning".)
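The fallback order people usually end up with looks something like this; a hedged sketch (the function takes a root directory argument so the logic can be exercised against a fake tree instead of the real /etc):

```shell
# Sketch: identify a distro, falling back to older distro-specific files
# on boxes that predate /etc/os-release. Pass "" to probe the real /etc.
detect_distro() {
  root="$1"
  if [ -r "$root/etc/os-release" ]; then
    # os-release is shell-sourceable key=value pairs (NAME, VERSION_ID, ...)
    . "$root/etc/os-release"
    echo "${NAME:-unknown} ${VERSION_ID:-}"
  elif [ -r "$root/etc/debian_version" ]; then
    echo "Debian $(cat "$root/etc/debian_version")"
  elif [ -r "$root/etc/redhat-release" ]; then
    cat "$root/etc/redhat-release"
  else
    echo "unknown"
  fi
}

# Simulate a pre-2013 Debian box: no os-release, only debian_version
fake=$(mktemp -d)
mkdir -p "$fake/etc"
echo "6.0.10" > "$fake/etc/debian_version"
detect_distro "$fake"   # -> Debian 6.0.10
rm -rf "$fake"
```

Real-world scripts often also check /etc/SuSE-release, /etc/lsb-release, and friends; the list above is just the skeleton.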


> The easiest thing would probably to specify the need for "x86-64-v3"

AFAIK, that only specifies the user-space-visible instruction-set extensions, not the presence or version of operating-system-level features like the APIC or an IOMMU.

