
> That's why it went beyond web, and why all modern native UI frameworks have a similar model these days.

It's more the other way around: this model started on the desktop (e.g. WPF), and then React popularized it on the web.


> It would be infinitely simpler if one could simply 'cross-compile' down to older symbol versions, but the tooling does not make this easy at all.

It's definitely not easy, but it's possible: using the `.symver` assembly (pseudo-)directive you can specify the version of the symbol you want to link against.
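A minimal sketch, assuming an x86-64 target (memcpy is the classic example, since glibc 2.14 changed it; `GLIBC_2.2.5` is the x86-64 baseline version — check what your libc actually exports with `objdump -T libc.so.6 | grep memcpy`, the path varies by distro):

    /* Ask the linker to bind memcpy to the old GLIBC_2.2.5 version
       instead of the newer default. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    #include <string.h>

    int main(void) {
        char dst[4];
        memcpy(dst, "abc", 4); /* resolves to memcpy@GLIBC_2.2.5 */
        return 0;
    }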


> why would it be that way?

It allows (among other things) the glibc developers to change struct layouts while remaining backwards compatible. E.g. if function f1 takes a struct as an argument, and its layout changes between v2 and v3, then glibc_v2_f1 and glibc_v3_f1 have different ABIs.
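Roughly how that looks on the definition side — a simplified sketch with a hypothetical `f1` and made-up version numbers (`@@` marks the default version that newly linked programs get, `@` an old compatibility version):

    struct data_v2 { long a; };           /* old layout */
    struct data_v3 { long a; long b; };   /* new layout */

    int f1_old(struct data_v2 *d) { return 0; /* old implementation */ }
    int f1_new(struct data_v3 *d) { return 0; /* new implementation */ }

    /* programs linked against the old library keep calling f1_old */
    __asm__(".symver f1_old, f1@GLIBC_2.0");
    /* newly linked programs resolve f1 to f1_new */
    __asm__(".symver f1_new, f1@@GLIBC_2.3");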


Some smartphones are locked down by their vendors. There are plenty of options to get full root access on something that's for all intents and purposes a smartphone, especially if you don't particularly care about warranty and/or keeping commercial apps functional.

The radio on all commercially available smartphones is locked down to meet regulatory requirements, and runs on an entirely different CPU from the Android OS that you might have root on.

The same's true for the radio on a Raspberry Pi, though.

True, but they're commonly used to control other non-consumer (e.g. unregulated) radios via GPIO, and in POCs for threat exploitation demonstrations, which are all over YouTube for idiots to mimic... and unlike phones, they aren't carried around by almost everyone on a daily basis.

I simply adore how flexibly-placed the goalposts are in this particular game of Calvinball.

Seriously, this thread reads almost as if 5 different people have chimed in with their own individual thoughts.

I'd reply very directly and in earnest, but such a reply would violate the guidelines.

Maybe the database upgrade from v(N-17) to v(N-16) simply takes a while, and hasn't completed yet? Or the responsible team is looking at it, but it doesn't warrant the whole company halting shipping?

Being 17 versions behind is an extreme example, but always having everything run the latest version in the repo is impossible, if only because deployments across nodes aren't perfectly synchronised.


This is why you have an active/passive setup and don't run half-deployed code in production. Using API contracts is a weak solution, because eventually you will write a bug. It's simpler to just say "everything is running the same version" and make that happen.


Blue/green might allow you to do (approximately) atomic deploys for one service, but it doesn't allow you to do an atomic deploy of the clients of that service as well.

Why is that? In a very simple case, all services of a monorepo run on a single VM: spin up a new VM, deploy the new code, verify, switch routing. Obviously, this doesn't work with humongous systems, but the idea can be expanded upon: make sure that components only communicate with compatible versions of other components. And don't break the database schema in a backward-incompatible way.

So yes, in theory you can always deploy sets of compatible services, but it's not really workable in practice: you either need to deploy the world on every change, or you need complicated logic to determine which services are compatible with which deployment sets of other services.

There's a bigger problem though: in practice there's almost always a client that you don't control, and can't switch along with your services, e.g. an old frontend loaded by a user's browser.


The notion of external clients is a smell. If that's the case, you need a compat layer between that client and your entrypoints, otherwise you'll have a very hard time evolving anything. In practice, this can include providing frontend assets under previously cached endpoints; a version endpoint that triggers cache busting; a load balancer routing to a legacy version for a grace period... sadly, there's no free lunch here.

The only way I could read their answer as being close to correct is if the clients they're referring to are not managed by the deployment.

But (in my mind at least) even a front end is going to get told it's out of date/unusable and needs to be upgraded when it next attempts to interact with the service. That means it will have to upgrade, which isn't "atomic" in the strictest sense of the word, but it's as close as you're going to get.


Squashing only results in a cleaner commit history if you're making a mess of the history on your branches. If you're structuring the commit history on your branches logically, squashing just throws information away.

I’m all ears for a better approach because squashing seems like a good way to preserve only useful information.

My history ends up being:

- add feature x
- linting
- add e2e tests
- formatting
- additional comments for feature
- fix broken test (CI caught this)
- update README for new feature
- linting

With a squash it can boil down to just “added feature x” with smaller changes inside the description.


If my change is small enough that it can be treated as one logical unit, that will be reviewed, merged and (hopefully not) reverted as one unit, all these follow-up commits will be amended into the original commit. There's nothing wrong with small changes containing just one commit, even if the work wasn't written or committed at one time.

Where logical commits (also called atomic commits) really shine is when you're making multiple logically distinct changes that depend on each other. E.g. "convert subsystem A to use api Y instead of deprecated api X", "remove now-unused api X", "implement feature B in api Y", "expose feature B in subsystem A". Now they can be reviewed independently, and if feature B turns out to need more work, the first commits can be merged independently (or if that's discovered after it's already merged, the last commits can be reverted independently).

If after creating (or pushing) this sequence of commits I need to fix linting/formatting/CI, I'll put the fixes in a fixup commit for the appropriate commit and meld them in using a rebase. Takes about 30s to do manually, and can be automated using tools like git-absorb. However, in reality I don't need to do this often: the breakdown of bigger tasks into logical chunks is something I already do, as it helps me stay focused, and I add tests and run linting/formatting/etc. before I commit.
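For the curious, that workflow is just the following (`abc1234` being a hypothetical hash of the commit the fix belongs to):

    git commit --fixup=abc1234        # recorded as "fixup! <subject of abc1234>"
    git rebase -i --autosquash main   # melds the fixup into its target commit

    # or let git-absorb pick the target commit automatically:
    git absorb --and-rebase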

And yes, more or less the same result can be achieved by creating multiple MRs and using squashing; but usually that's a much worse experience.


You can always take advantage of the graph structure itself. With `--first-parent`, git log just shows your integration points (top-level merge commits, PR merges with `--no-ff`) like `Added feature X`. `--first-parent` applies to blame, bisect, and other commands as well. When you "need" or most want linear history, you have `--first-parent`, and when you need the details "inside" a previous integration, you can still get to them. You can preserve all the information and yet focus only on the top-level information by default.

It's just too bad not enough graphical UIs default to `--first-parent` and a drill-down like approach over cluttered "subway graphs".
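For reference, the drill-down looks roughly like this (`git bisect start --first-parent` needs git >= 2.29; `abc1234` is a hypothetical merge commit):

    git log --first-parent --oneline main   # only the integration points
    git blame --first-parent some/file      # attribute lines to merges
    git bisect start --first-parent         # bisect over merges only

    git log --oneline abc1234^..abc1234     # the commits "inside" one merge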


Stacked diffs are the best approach. Working at a company that uses them and reading about the "pull request" workflow that everyone else subjects themselves to makes me wonder why everyone isn't using stacked diffs instead of eternally repeating this "squash vs. not squash" debate.

every commit is reviewed individually. every commit must have a meaningful message, no "wip fix whatever" nonsense. every commit must pass CI. every commit is pushed to master in order.


Not everyone develops and commits the same way and mandating squashing is a much simpler management task than training up everyone to commit in a similar manner.

Besides, they probably shouldn't make PR commits atomic, but commit as often as needed. It's a good way to avoid losing work. This is in tension with leaving behind clean commits, and squashing resolves it.

The solution there is to make your commit history clean by rebasing it. I often end my day with a “partial changes done” commit and then the next day I’ll rebase it into several commits, or merge some of the changes into earlier commits.

Even if we squash it into main later, it’s helpful for reviewing.
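Mechanically, that cleanup is an interactive rebase; a sketch of splitting yesterday's WIP commit (the commit messages here are made up):

    git rebase -i main      # mark the "partial changes done" commit as "edit"
    git reset HEAD^         # put its changes back into the working tree
    git add -p              # stage the first logical chunk
    git commit -m "extract config parsing"
    git add -p
    git commit -m "add YAML support"
    git rebase --continue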


We also do conventional commits: https://www.conventionalcommits.org/

Other than that, we're pretty free in how we write commit messages.


At work there was only one way to test a feature, and that was to deploy it to our dev environment. The only way to deploy to dev was to commit the code to a branch in the repo, and deploy from that branch.

So one branch had 40x "Deploy to Dev" commits. And those got merged straight into the repo.

They added no information.


What you really need is stacked changes, where each commit is reviewed, run on CI, and merged independently.

No information loss, and every commit is valid on its own, so cherry-picks maintain the same level of quality.


Good luck getting 100+ devs to all use the same logical commit style. And if tests fail in CI you get the inevitable "fix tests" commit in the branch, which now spams your main branch more than the meaningful changes. You could rebase the history by hand, but what's the point? You'd have to force push anyway. Squashing is the only practical method of clean history for large orgs.

This, even with just 5 devs.

Also, rebasing is just so fraught with potential errors: every month or two, the devs who were rebasing would screw up some feature branch with work they needed on it, and would look to me to fix it for some reason. Such a time sink for so little benefit.

I eventually banned rebasing, force pushes, and mandated squash merges to main - and we magically stopped having any of these problems.


We squash, but still rebase. For us, this works quite well. As you said, rebasing needs to be done carefully... But the main history does look nice this way.

Why bother with the rebase if you squash anyway? That history just gets destroyed?

Rebase before creating PR, merge after creating PR.

> Good luck getting 100+ devs to all use the same logical commit style

The Linux kernel manages to do it for 1000+ devs.


True, but there's a huge trade-off in time management.

I can spend hours OCDing over my git branch commit history.

-or-

I can spend those hours getting actual work done and squash at the end to clean up the disaster of commits I made along the way so I could easily roll back when needed.


it's also very easy to rewrite commit history in a few seconds.

If I'm rewriting history ... why not just squash?

But also, rewriting history only works if you haven't pushed code and are working as a solo developer.

It doesn't work when the team is working on a feature in a branch and we need to be pushing to run and test deployment via pipelines.


> But also, rewriting history only works if you haven't pushed code and are working as a solo developer.

Weird, it works fine on our team. Force-with-lease allows me to push again, and the most common type of branch is per-dev and short-lived.
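For reference (the branch name is just an example):

    # refuses the push if the remote branch moved since your last fetch,
    # so you can't accidentally clobber a teammate's work
    git push --force-with-lease origin my-feature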


The article is from 2019, things might also simply have changed since then.

That's true for local hooks, but neither a dishonest person nor an LLM can bypass a pre-receive hook on the server (as long as they don't have admin access).
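For the unfamiliar, a minimal pre-receive sketch: it lives in `hooks/pre-receive` on the server, and a nonzero exit rejects the whole push. The Signed-off-by policy here is just an example, not what any particular server checks:

    #!/bin/sh
    zero=0000000000000000000000000000000000000000
    while read oldrev newrev refname; do
        [ "$newrev" = "$zero" ] && continue      # branch deletion
        if [ "$oldrev" = "$zero" ]; then
            range="$newrev --not --all"          # new branch: only new commits
        else
            range="$oldrev..$newrev"
        fi
        for c in $(git rev-list $range); do
            if ! git log -1 --format=%B "$c" | grep -q '^Signed-off-by:'; then
                echo "rejected: commit $c lacks Signed-off-by" >&2
                exit 1
            fi
        done
    done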

Thanks, apparently most people here aren't familiar with server-side hooks.
