It doesn’t have to be like X11. Presumably, it’d be something you could disable if you’d like.
It’d be very handy if we had a performant remote desktop option for Linux. I could resume desktop sessions on my workstation from my laptop and I could pair program with remote colleagues more effectively.
In the past I’d boot into Windows and then boot my Linux system as a raw disk VM just so I could use Windows’s Remote Desktop. Combined with VMware Workstation’s support for multiple monitors, I had a surprisingly smooth remote session. But, it was a lot of ceremony.
When the news hit that the entry model was being retired I thought it might not be so bad because there’s probably a deluge of used models available, either because the owner doesn’t really use it or because they upgraded to the OLED. I was astounded by how many blatant scam postings there are on Facebook Marketplace. I can’t imagine that Meta can’t detect these, since every post title ends in some random four-character alphanumeric string. I’m concerned now that we’re going to see an uptick in people being scammed because they want a Steam Deck but can’t afford the OLED models.
I don’t have any data to support this, but I suspect a sizable segment of PC gamers aren’t going to view this as the impetus they needed to splurge on the OLED. I doubt very many of those people see this as a double-edged sword. It doesn’t particularly matter to them what Valve’s confidence in the product is if they can’t afford to buy one. While some may buy the step-up model, many won’t. Valve loses out on the sale of the hardware and on the sale of the software to run on it. And I’d be concerned that ceding the lower end of the market is going to poison the well like video game consoles in the early 80s.
With that said, Valve almost certainly has the data and would know better than me. It seems like a gamble to me. Maybe the post is correct and this is all about price anchoring for the new Steam Machine and Frame.
If you’re just committing for your own sake, that workflow sounds productive. I’ve been asked to review PRs with 20+ commits, each with a “wip” or “.” commit message, with the argument: “it’ll be squash merged, so who cares!” I’m sure that works well for the author, but it’s not great for the reviewer. Breaking change sets up into smaller, logical chunks really helps with comprehension. I’m not generally a fan of people being cavalier with my time so they can save their own.
For my part, I find the “local history” feature of the JetBrains IDEs gives me automatic checkpoints I can roll back to without needing to involve git. On my Linux machines I layer in ZFS snapshots (Time Machine probably works just as well for Macs). This gives me the confidence to work throughout the day without needing to compulsively commit. These have the added advantage of tracking files I haven’t yet added to the git repo.
There are two halves here. Up until the PR is opened, the author should feel free to have 20+ "wip" commits (or, in my case, "checkpoint" commits). However, it is also up to the author to curate those commits before pushing and opening the PR.
So when I open a PR, I'll have a branch with a gajillion useless commits, which I then curate down to a logical set of commits with appropriate commit messages. Usually this is a single commit, but if I want to highlight some specific pieces as being separable for a reviewer, it'll be multiple commits.
The key point here is that none of those commits exist until just before I make my final push prior to a PR.
I clean up commits locally as well. But, I really only commit when I think I have something working and then collapse any lint or code formatting commits from there. Sometimes I need to check another branch and am too lazy to set up worktrees, so I may create a checkpoint commit and name it a way that reminds me to do a `git reset HEAD^` and resume working from there.
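For anyone who hasn't done that dance, it looks roughly like this (the branch name and message are made up):

```sh
# park the half-finished state so I can switch branches
git add -A
git commit -m "checkpoint: reset me"

git switch colleagues-branch     # go review, run their code, etc.

# come back and pick up where I left off
git switch -
git reset HEAD^                  # drop the checkpoint commit, keep the changes in the working tree
```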
But, if you're really worried about losing 15 minutes of work, I think we have better tools at our disposal, including ones that will clean up after themselves over time. Now that I've been using ZFS with automatic snapshots, I feel hamstrung working on any Linux system that's just using ext4 without LVM. I'm aware this isn't a common setup, but I wish it were. It's amazing how liberating it is to edit code, update a config file, install a new package, etc. when you know you can roll back the entire system with one simple command (or restore a single file if you need that granularity). And it works for files you haven't yet added to the git repo.
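A rough sketch of what that looks like with ZFS, assuming a dataset named tank/home mounted at /home (in practice a tool like zfs-auto-snapshot or sanoid takes the snapshots for you, and `zfs rollback` only targets the most recent snapshot unless you pass `-r`):

```sh
# checkpoint the dataset before experimenting
sudo zfs snapshot tank/home@before-refactor

# ...edit code, tweak configs, install packages...

# roll the whole dataset back with one command
sudo zfs rollback tank/home@before-refactor

# or pull a single file out of the read-only snapshot directory
cp /home/.zfs/snapshot/before-refactor/me/app/config.yml ~/app/config.yml
```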
I guess my point is: I think we have better tools than git for automatic backups and I believe there's a lot of opportunity in developer tooling to help guard against common failure scenarios.
I don't commit as a backup. I commit for other reasons.
Most common is I'm switching branches. Example use case: I'm working locally, and a colleague has a PR open. I like to check out their branch when reviewing as then I can interact with their code in my IDE, try running it in ways they may not have thought of, etc.
Another common reason I switch branches is that sometimes I want to try my code on another machine. Maybe I'm changing laptops. Maybe I want to try the code on a different machine for some reason. Whatever. So I'll push a WIP branch with no intention of it passing any sort of CI/CD just so I can check it out on the other machine.
The throughline here is that these are moments where the current state of my branch is in no way, shape, or form intended as an actual valid state. It's just whatever state my code happened to be in when I needed to save it.
I think you might appreciate https://www.jj-vcs.dev, which makes it a lot easier to split and recombine changes. I often use it for checkpoints, although you wouldn't see that from looking at what I push :).
One nifty feature is that commits don't need messages, and also it'll refuse (by default) to push commits with no message. So your checkpoint commits are really easy to create, and even easier to avoid pushing by mistake.
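From memory, the checkpoint flow looks something like this (exact flags may differ between jj releases):

```sh
jj new                        # start a new, description-less change on top of the current one
# ...hack until things work...
jj new                        # leave that checkpoint behind and keep going in a fresh change

# before pushing: reshape the history until it reads well
jj squash                     # fold the current change into its parent
jj split                      # or carve one change into two
jj describe -m "Add retry logic to the API client"
jj git push                   # by default, refuses to push changes with empty descriptions
```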
Why do you care about the history of a branch? Just look at the diff. Caring about the history of a branch is weird, I think your approach is just not compatible with how people work.
A well laid out history of logical changes makes reviewing complicated change sets easier. Rather than one giant wall of changes, you see a series of independent, self-contained changes that can be reviewed on their own.
Having 25 meaningless “wip” commits does not help with that. It’s fine when something is indeed a work in progress. But once it’s ready for review it should be presented as a series of cleaned up changes.
If it is indeed one giant ball of mud, then it should be presented as such. But more often than not, that just shows a lack of discipline on the part of the creator. Variable renames, whitespace changes, and other cosmetic things can be skipped over to focus on the meat of the PR.
From my own experience, people who work in open source and have been on the review side of large PRs understand this the best.
Really the goal is to make things as easy as possible for the reviewer. The simpler the review process, the less reviewer time you’re wasting.
> A well laid out history of logical changes makes reviewing complicated change sets easier.
I've been on a maintenance team for years and it's also been a massive help here, in our svn repos where squashing isn't possible. Those intermediate commits with good messages are the only context you get years down the line, when the original developers are gone or don't remember the reasons for something, and they've saved us so many times.
I'm fine with manual squashing to clean up those WIP commits, but a blind squash-merge should never be done. It throws away too much for no good reason.
For one quick example, code linting/formatting should always be a separate commit. A couple times I've seen those introduce bugs, and since it wasn't squashed it was trivial to see what should have happened.
I agree: in a job where you have no documentation and no CI, where you're working on something almost as old as you or older with ancient, abandoned tools like svn that stopped being relevant 20 years ago, and in a fundamentally dysfunctional company/organization that hasn't bothered to move off of dead/dying tools in those 20 years, you desperately grab at anything you can possibly find to try to avoid breaking things. But there are far better solutions to all of the problems you are mentioning than trying to make people create little mini feature commits on their way to a feature.
It is not possible to manually document everything down to individual lines of code. You'll drive yourself crazy trying to do so (and good luck getting anyone to look at that massive mess), and that's not even counting how documentation easily falls out of date. Meanwhile, we have "git blame" designed to do exactly that with almost no effort - just make good commit messages while the context is in your head.
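To make that concrete, the archaeology I'm describing looks roughly like this (the file path, line range, commit hash, and search term are all made up):

```sh
# which commit last touched these lines, and who wrote it?
git blame -L 120,140 src/billing.c

# read the message that commit carried -- that's the context you saved
git show 1a2b3c4

# or find every commit that added or removed a given symbol
git log -S "retry_limit" -- src/
```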
CI also doesn't necessarily help here - you have to have tests for all possible edge cases committed from day one for it to prevent these situations. It may be a month or a year or several years later before you hit one of the weird cases no one thought about.
Calling svn part of the problem is also kind of backwards - it has no bearing on the code quality itself, but I brought it up because it was otherwise forcing good practice because it doesn't allow you to erase context that may be useful later.
Over the time I've been here we've migrated from Bugzilla to FogBugz to Jira, from an internal wiki to ReadTheDocs to Confluence, and some of these hundreds of repos we manage started in cvs, not svn, and are now slowly being migrated to git. Guess what? The cvs->svn->git migrations are the only ones that didn't lose any data. None of the Bugzilla cases still exist and only a very small number were migrated from FogBugz to Jira. Some of the internal wiki was migrated directly to Confluence (and lost all formatting and internal links in the process), but the ReadTheDocs content is all gone. Commit messages are really the only thing you can actually rely on.
> Calling svn part of the problem is also kind of backwards - it has no bearing on the code quality itself
Let's just be Bayesian for a minute. If an organization can't figure out how to get off of svn, a technology that has been basically dead in most of tech for 15-20 years, then it's probably not going to be nimble in other ways. Probably it's full of people who don't really do any work.
> Some of the internal wiki was migrated directly to Confluence (and lost all formatting and internal links in the process)
Dude, this is what I mean. How did someone manage to mess this up? It's not exactly rocket science to script something to suck the content out of one wiki and shove it into another. But let's say it's hard to do (it's not). Did they just not even bother to look at what they did? They just figured "meh", declared victory, and there were no consequences; nobody bothered to go back and redo it or fix it? Moving stuff between wikis is an intern-skill-level task. This is another example that screams that the people at your work don't do their jobs and don't care about their work, and that this is tolerated or, more likely, not even noticed. Do you work for the government?
> Commit messages are really the only thing you can actually rely on.
I suspect you are exaggerating how reliable your commit messages are, considering.
> A well laid out history of logical changes makes reviewing complicated change sets easier. Rather than one giant wall of changes, you see a series of independent, self-contained changes that can be reviewed on their own.
But this would require hand curation? No development proceeds that way, or if it does then I would question whether the person is spending 80% of their day curating PRs unnecessarily.
I think you must be kind of senior and you can get away with just insisting that other people be less efficient and work in a weird way so you can feel more comfortable?
> But this would require hand curation? No development proceeds that way, or if it does then I would question whether the person is spending 80% of their day curating PRs unnecessarily.
If you’re working on something and a piece of it is clearly self contained, you commit it and move on.
> I think you must be kind of senior and you can get away with just insisting that other people be less efficient and work in a weird way so you can feel more comfortable?
You can work however you like. But when it’s time to ask someone else to review your work, the onus is on you to clean it up to simplify review. Otherwise you’re saying your time is more valuable than the reviewer’s.
> But this would require hand curation? No development proceeds that way, or if it does then I would question whether the person is spending 80% of their day curating PRs unnecessarily.
It's not really hand curation if you're deliberate about it from the get-go. It's certainly not eating up 80% of anyone's time.
Structuring code and writing useful commits are skills to develop, just like writing meaningful tests. As a first step, use `git add -p` instead of `git add .` or `git commit -a`. As an analogy, many junior devs will just test everything, even stuff that doesn't make a lot of sense, and jumble it all together. It takes practice to learn how to better structure that stuff, and it isn't done by writing a ton of tests and then curating them after the fact.
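A minimal illustration of that flow (the commit messages are invented):

```sh
# stage only the hunks that belong to the first logical change
git add -p
git commit -m "Extract retry helper"

# then stage what's left for the next one
git add -p
git commit -m "Use retry helper in the API client"
```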
> I think you must be kind of senior and you can get away with just insisting that other people be less efficient and work in a weird way so you can feel more comfortable?
Your personal productivity should only be one consideration. The long-term health of the project (i.e., maintenance) and the impact on other people's efficiency also must be considered. And efficiency isn't limited to how quickly features ship. Someone who ships fast but makes it much harder to debug issues isn't a top performer. At least, in my experience. I'd imagine it's team, company, and segment-dependent. For OSS projects with many part-time contributors, that history becomes really important because you may not have the future ability to ask someone why they did something a particular way.
Aha, I see the issue here. What you seem to organize into cute little self-contained 'commits' I would put on individual 'branches'.
It is too hard for you to get someone to look at a PR, so you are packing multiple 'related' but not interdependent changes into one PR as individual commits so you can minimize the number of times you have to get someone to hit "approve", which is the limiting resource.
In your situation I believe your way of working is a rational adaptation, but only insofar as you lack the influence to address the underlying organizational/behavioral dysfunction. We agree on the underlying need to write good messages, but where I merge 4-5 small branches per day, each squashed to one commit, you are saving them all up to get them (unnecessarily) put into a single merge commit.
Just as "Structuring code" is a skill to develop, so is building healthy organizations.
Repeatedly, you've been dismissive and insulting. It's not conducive to productive conversation. Your characterization of what I do or how I work is wrong. You latched on to some small part you thought would let you "win" and ran with it. If you actually care, I do a lot of open source work so you can find exactly how I work. Naturally, you can't see what I do in private, but I assure you it's not significantly different.
I aim to ship reasonably complete functionality. The "V" in "MVP" means it needs to be viable, not just minimal. Shipping some part that doesn't work standalone isn't useful to anyone. Yes, the PR is smaller, but now the context for that work is split over multiple PRs, which may not be reviewed by the same people. No one really has the full picture beyond me, which I guess is a good way to get my PRs rapidly approved, but a terrible way to get feedback on the overall design.
I don't work with you, so I don't particularly care how you work. Again, I was offering up other solutions than running `git commit` every 15 minutes. If you want to manually simulate filesystem snapshots, that's your prerogative. But you're incorrect that any model other than the one you employ is niche and not how software is written. Elsewhere you dismissed the examples of large open source projects as being unique. But you'll find substantially smaller ones also employ a model closer to what I've described.
On the contrary, it seems to me that it is your approach which is incompatible with others. I'm not the same person you were replying to but I want the history of a branch to be coherent, not a hot mess of meaningless commits. I do my best to maintain my branches such that they can be merged without squashing, that way it reflects the actual history of how the code was written.
It's how code is written in Google (including their open-source products like AOSP and Chromium), the ffmpeg project, the Linux Kernel, Git, Docker, the Go compiler, Kubernetes, Bitcoin, etc, and it's how things are done at my workplace.
I'm surprised by how confident you are that things simply aren't done this way considering the number of high-profile users of workflows where the commit history is expected to tell a story of how the software evolved over time.
"It's how code is written" then you list like the 6 highest profile, highest investment premier software projects on Earth like that's just normal.
I'm surprised by how confident you are when you can only name projects you've never worked on. I wanted to find a commit of yours to prove my point, but I can't find a line of code you've written.
Presumably, a branch is a logical segment of work. Otherwise, just push directly to master/trunk/HEAD. That's what people did for a long time with CVS, and it arguably worked to some extent. Using merge commits is pretty common and, as such, that branch will get merged into the trunk. Being able to understand that branch in isolation is something I've found helpful in understanding the software as a whole.
> Caring about the history of a branch is weird, I think your approach is just not compatible with how people work.
I get that you disagree with me, but you could be less dismissive about it. Work however you want -- I'm certainly not stopping you. I just don't want your productivity to come at the expense of mine. And I offered up other potential (and, IMHO, superior) solutions from both developer and system tools.
I suppose what type of project you're working on matters. The "treat git like a versioned zip file" approach with squash merges works reasonably well for SaaS applications because you rarely need to roll anything back. However, I've found a logically structured history indispensable when working on long-lived projects, particularly in open source. It's how I'm able to dig into a 25 year old OSS tool and be reasonably productive with it.
To the point I think you're making: sure, I care what changed, and I can do that with `diff`. But, more often if I'm looking at SCM history I'm trying to learn why a change was made. Some of that can be inferred by seeing what other changes were made at the same time. That context can be explicitly provided with commit messages that explain why a change was made.
Calling it incompatible with how people work is a pretty bold claim, given the practice of squash merging loads of mini commits is a pretty recent development. Maybe that's how your team works and if it works for you, great. But, having logically separate commits isn't some niche development practice. Optimizing for writes could be useful for a startup. A lot of real world software requires being easy to maintain and a good SCM history shines there.
All of that is rather orthogonal to the point I was trying to add to the discussion. We have better tools at our disposal than running `git commit` every 15 minutes.
Having immutable objects by default isn’t incredibly commonplace outside of functional languages. It certainly isn’t unique to Ruby and seems out of place in a discussion comparing Ruby to Python. Fortunately, you can defensively freeze any objects you’re passing around to avoid the most common issues with mutable objects.
Immutable strings are a more popular programming language feature, and Ruby has a mechanism for opting into them. It’s so commonplace that the complaint usually isn’t that a string can be modified, but rather that every source file includes a magic comment to prevent it. Besides data safety, the VM can optimize frozen strings, so popular linters will flip that setting on for you. String mutability isn’t a practical issue for modern codebases. And, as language design goes, it’s kinda nice not needing a parallel set of classes for mutable and immutable string data, IMHO.
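A small Ruby sketch of the magic comment and defensive freezing I mean (the FrozenError naming assumes a reasonably recent Ruby):

```ruby
# frozen_string_literal: true

name = "alice"
name.frozen?               # => true
# name << "!"              # would raise FrozenError: can't modify frozen String

buffer = +""               # unary + hands back a mutable copy when you genuinely need one
buffer << "chunk"

config = { "env" => "prod" }.freeze
# config["env"] = "dev"    # would raise FrozenError: can't modify frozen Hash
```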
With that said, the magic comment is a wart and folks are looking at making immutable strings the default. But, there’s a strong desire to avoid breaking the world. The Ruby Core team is keen to keep the lessons learned from the Python 2 -> 3 migration in mind.
Not only a wart but a massive foot gun, not shared by any other language, as I said. It is incredibly common to create hash tables using strings as keys, and Ruby makes this dangerous by default.
> Having immutable objects by default isn’t incredibly commonplace outside of functional languages
They make it easy to create immutable objects. Python has tuples and immutable data classes. Strings are immutable.
> The Ruby Core team is keen to keep the lessons learned from the Python 2 -> 3 migration in mind.
In the meantime, this ridiculous foot gun that no other language shares exists. That is a fact, and the fix being hard does not make it any less of a fact.
Plaid still rubs me the wrong way. Not selling to 3rd parties is great. But, everyone uses it, so that's still a lot of people getting data I don't necessarily want them to have. If I want to link a bank account to a credit card account in order to pay my bill, there's zero reason for that credit card company to have access to my bank transaction data. I still do the ACH deposit verification method where I can in order to avoid Plaid. I'd love more granular controls here or an audit log of what was pulled in.
SimpleFIN¹ looks compelling. Actual Budget can use that and it seems to work more like a privacy-oriented Plaid. But, now you need to trust a much smaller player. Really, I wish this were all standardized with strict privacy requirements.
Actual Budget uses SimpleFIN [1] in the US. The integration is pretty good. The big alternative is Plaid and I don't trust them at all. It's a shame we don't have a standard for electronic banking yet.
If all you're looking to do is produce a design the quickest way possible, then sure, Fusion often wins. Just as there was a time when buying Maya made more sense than using Blender. But FreeCAD offers other niceties, like being able to work offline, an open file format, a performant non-web UI, and generally avoiding vendor lock-in. And Autodesk already did a major rug pull with Fusion 360 licensing once.
I mostly design functional 3D prints. I've found FreeCAD 1.0 fixed most of the annoyances I ran into and I'm pretty productive with it. But, I didn't come into it with an expectation of a SolidWorks or Fusion clone. I learned the tool with its own idioms and it seems pretty straightforward to me. It's not perfect by any means and I've run into the occasional bug. To that end, I've found reporting bugs with reproducible steps goes a long way to getting things fixed.
I'm not sure what it is about CAD in particular, but I find everyone wants the "Blender of the CAD world" while skipping over the decade of investment it took to get Blender where it is. For a long time, discussions about Blender were dominated by complaints about the UX. If we didn't have folks willing to work past a hit to productivity in order to make an investment into Blender, we wouldn't have the amazing open source tool we have today. FreeCAD has all the expectations of a high quality open source CAD tool with hardly any of the investment. Just getting people on /r/freecad to file issues is surprisingly challenging.
By all means, if you're happy with Fusion and don't mind the licensing, have at it. I'm sure there's functionality in there without an equivalent in FreeCAD. I'd personally rather not have my designs locked up in Fusion and see FreeCAD as the best option for me, even if it suffers from the challenges of open source UI design.
Have you tried since the 1.0 release? There were quite a few improvements that were locked behind weekly builds for a long time. AstoCAD[1] might be another option for you. It's basically FreeCAD with a streamlined UI.
Maybe it's a local bump, but it sure seems like SQLite has become a far more popular topic in the Rails world, though I wouldn't expect to find that in an HN search tool. SQLite has gone from the little database you might use to bootstrap or simplify local development to something products ship with in production. Functionality like solid_cable, solid_cache, and solid_queue allows SQLite to be used in more areas of a Rails application and is pitched as a way to simplify the stack.
While I don't have stats about every conference talk from the last decade, my experience has been that SQLite has been featured more in Rails conference talks. There's a new book titled "SQLite on Rails: The Workbook" that I don't think would have had an audience five years ago. And I've noticed more blog posts and more discussion on Rails-related discussion platforms. Moreover, I expect we'll see SQLite gain even more popularity as it simplifies multi-agent development with multiple git worktrees.
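To make the solid_cable/solid_cache/solid_queue point concrete, the Rails 8-style setup points each of those at its own SQLite database. The generated config looks roughly like this (simplified, and the database names are illustrative):

```yaml
default: &default
  adapter: sqlite3
  pool: 5
  timeout: 5000

production:
  primary:
    <<: *default
    database: storage/production.sqlite3
  cache:
    <<: *default
    database: storage/production_cache.sqlite3
    migrations_paths: db/cache_migrate
  queue:
    <<: *default
    database: storage/production_queue.sqlite3
    migrations_paths: db/queue_migrate
  cable:
    <<: *default
    database: storage/production_cable.sqlite3
    migrations_paths: db/cable_migrate
```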
Having used CI systems and application frameworks that support YAML anchors for configuration, I think adding in a programming language would be a massive amount of complexity for very little gain. We're not talking about dozens of locations with hundreds of lines of shared code.
Asking the team to add a new build dependency, learn a new language, and add a new build step would create considerably more problems, not fewer. Used sparingly and as needed, YAML anchors are quite easy to read. A good editor will even allow you to jump to the source definition just as it would any other variable.
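For example, something along these lines (a hypothetical fragment; the job and step names are made up, and it assumes the workflow parser honors YAML merge keys as well as plain anchors):

```yaml
env: &default-env
  RAILS_ENV: test
  BUNDLE_FROZEN: "true"

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: bundle exec rubocop
  test:
    runs-on: ubuntu-latest
    env:
      <<: *default-env         # reuse the shared env, then override locally
      COVERAGE: "1"
    steps:
      - uses: actions/checkout@v4
      - run: bundle exec rspec
```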
Being self-contained without any additional dependencies is a huge advantage, particularly for open source projects, IMHO. I'd wager very few people are going to learn Dhall in order to fix an issue with an open source project's CI.
Your team doesn't know YAML; it knows GitHub Actions. There's zero transferable knowledge when switching from GitHub Actions to Kubernetes deployments, just as there is precisely zero correlation between Kubernetes and Ansible configs. 'It's all YAML' is a lie, and I'm continually surprised so many people have been falling for it for so long. YAML is the code-as-data; the interpreter determines what it all means.
Oh, for goodness' sake. We know YAML syntax, and that's the only part that's relevant here. Pointing out that different software uses different keys for its configuration, or even takes different actions for keys that happen to share the same name, isn't particularly insightful. We haven't been bamboozled.
Now I understand even less. If you freely admit they’re different languages, the only reason to keep using the stupid, deficient syntax is momentum, and while that isn’t a bad reason, it is costing you and everyone else in the long run.
Huh? I'm using YAML because that's the language used to configure GitHub Actions. You may not like YAML, and that's fine. But if we collectively had to learn the unique way each project generates their GitHub Actions config, that would be a massive waste of time.
YAML isn't that hard. Most GitHub Actions configs I see are well under 500 lines; they're not crumbling under the weight of complexity.
Assembly isn't hard either, and yet almost nobody writes it anymore, for a reason, just as nobody (to within an epsilon) writes JVM opcodes directly. Somehow the industry decided assembly is actually fine when it comes to workflows.
I'm saying GHA should use a proper programming language instead of assembly.
Use the language you are already working in? Most languages have good YAML serialization and I think in most languages a function call taking a couple parameters that vary to produce slightly different but related objects is going to be as readable or more readable than YAML anchors.
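As a minimal sketch of that approach in Python, assuming PyYAML is available (the job layout and output path are invented):

```python
import yaml  # PyYAML

def job(command: str, ruby: str = "3.3") -> dict:
    """One CI job; only the command (and optionally the Ruby version) varies."""
    return {
        "runs-on": "ubuntu-latest",
        "steps": [
            {"uses": "actions/checkout@v4"},
            {"uses": "ruby/setup-ruby@v1", "with": {"ruby-version": ruby}},
            {"run": command},
        ],
    }

workflow = {
    "name": "ci",
    "on": ["push", "pull_request"],
    "jobs": {
        "lint": job("bundle exec rubocop"),
        "test": job("bundle exec rspec"),
    },
}

# write the file the CI system actually reads
with open(".github/workflows/ci.yml", "w") as f:
    yaml.safe_dump(workflow, f, sort_keys=False)
```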
That would be better, but it's an option I already have available to me and it's just not attractive. AFAIK, GitHub Actions requires the config files to be committed. So, now I need to guard against someone making local modifications to a generated file. It's doable of course, but by the time I've set all this up, it would have been much easier for everyone to copy and paste the six lines of code in the three places they're needed. YAML anchors solve that problem without really creating any new ones.
If generating your GitHub Actions config from a programming language works for you, fantastic. I'm just happy we now have another (IMHO, attractive) option.
Most of the debate here is that a lot of us don't find YAML anchors attractive. They can be one of the papercuts of using YAML.
I mostly agree with the article: with GitHub Actions specifically, I try to refactor things up to the top-level "workflow" level first, and then, yeah, resort to copy and paste in most other cases.
I'm a little less adamant than the original poster that GitHub should remove anchor support again, but I do sympathize greatly, having had to debug some CircleCI YAML and Helm charts making heavy use of YAML anchors. CircleCI's YAML is so bad that I have explored options to generate it with a build process. Yes, that creates new problems, and none of those explorations got far enough to really improve the process, but one of the pushes to explore them was certainly that YAML anchors are a mess to debug, especially when you've got some other tool concatenating YAML files together, which can produce anchor conflicts (and other parts of the same YAML that depend on a particular way those conflicts overwrite each other, oof). I don't see GitHub Actions necessarily getting that bad just by enabling anchors, but I have seen enough places where anchors become a crutch and a problem.
That's fair. And I'm not arguing that YAML anchors can never be a problem. I am saying that layering in a whole custom build system to handle a 250 line ci.yml file is not a trade-off I'd make. What I'd hazard most teams do in that situation is duplicate config, which is not without its own problems. I think YAML anchors are a fine solution for these cases, and I don't think they'll lead to total chaos. Alas, not all config options can be hoisted to a higher level, and I'm trusting that a team has explored that option when it's available.
If you're dealing with 10s of files that are 1000s of lines long, then YAML anchors may very well not be the ideal option. Having the choice lets each team find what works best for them.
Ouch. That sounds terrible with or without YAML anchors. GitHub Actions has overall been a great addition, allowing projects to integrate CI directly into their PR process. But, I never understood why it didn't have simpler paths for the very common use cases of CI and CD. Virtually any other dedicated CI product is considerably easier to bootstrap.