
This is super cool. Love the little icons on the left; it would be nice if they were clickable.

Very cool but I think my wireless bluetooth one is even cooler ;)

https://blog.waleson.com/2024/10/bakelite-to-future-1950s-ro...

It actually supports using the rotary dial to call phone numbers on your smartphone.


I think my full-blown mobile rotary phone is EVEN cooler:

https://www.stavros.io/posts/irotary-saga/

It actually makes calls itself and has a SIM.


ok you win

Someone should make a Bluetooth box that outputs to a standard phone connector so any standard phone would work. Has anyone done that?

Someone has - the Cell2Jack is only $36 USD:

https://www.cell2jack.com/

I use one with a 1970's vintage rotary desk phone and it works well.


I think I've seen it, I was thinking of making a USB one. I might!

Would love to see an update for 2025


Nix is 11th, Rust is 13th, and C is 9th. Interesting!

I really, really want this updated too and saw it in my bookmarks. Figured the historic data was interesting, and that someone might want to give this another go.

+1. This has historical value, but 11 years is an eon in IT.

Agreed, but: I know a couple of players in the "Enterprise Low-Code" space, who have invested heavily in deeply integrated development environments (with a capital I) and the right abstractions. They are all struggling with AI adoption as their systems "don't speak text". LLMs are great at grokking text-based programming but not much else.

To me, enterprise low-code feels like the latest iteration of the impetus that birthed COBOL: the idea that we need to build tools for these business people because the high-octane stuff is too confusing for them. But they are going about it the wrong way; we shouldn't kiddie-proof our dev tools to make them understandable to mere mortals, but instead make our dev tools understandable enough that devs don't have to be geniuses to use them. Given the right tools, I've seen middle schoolers code sophisticated distributed algorithms that grad students struggle with, so I'm very skeptical that this dilemma isn't self-imposed.

The thing about LLMs being only good with text is it's a self-fulfilling prophecy. We started writing text in a buffer because it was all we could do. Then we built tools to make that easier, so all the tooling was text-based. Then we produced a mountain of text-based code. Then we trained the AI on the text because that's what we had enough of to make it work, so of course that's what it's good at. Generative AI also seems to be good at art, because we have enough of that lying around to train on as well.

This is a repeat of what Seymour Papert realized when computers were introduced to classrooms around the 80s: instead of using the full interactive and multimodal capabilities of computers to teach in dynamic ways, teachers were using them just as "digital chalkboards" to teach the same topics in the same ways they had before. Why? Because that's what all the lessons were optimized for, because chalkboards were the tool that was there, because a desk, a ruler, paper, and pencil were all students had. So the lessons focused on what students could express on paper and what teachers could express on a chalkboard (mostly times tables and 2D geometry).

And that's what I mean by "investment", because it's going to take a lot more than a VC writing a check to explore that design space. You've really gotta uproot the entire tree and plant a new one if you want to see what would have grown if we weren't just limited to text buffers from the start. The best we can get is "enterprise low code" because every effort has to come with an expected ROI in 18 months, so the best story anyone can sell to convince people to open their wallets is "these corpos will probably buy our thing".


As someone who recently started looking into that space: that problem seems to be getting tackled via agents and MCP tooling, meaning Fusion, Workato, Boomi, and similar.

This is really useful. Might want to add a checkbox at a certain threshold, so that reviewers explicitly answer the concerns of the LLM. Also you could start collecting stats on how "easy to review" the PRs of team members are; e.g. they'd probably get a better score if they already address the concerns in the comments.
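
A rough sketch of what that stats collection could look like, in Python, assuming the tool already records per-PR concern counts. All names and the scoring formula below are hypothetical, not something the tool actually exposes:

    from collections import defaultdict
    from statistics import mean

    def review_ease_by_author(prs):
        """Aggregate a hypothetical 'ease of review' score per author.

        `prs` is assumed to look like
        [{"author": "alice", "llm_concerns": 3, "addressed_in_comments": 2}, ...];
        the 1/(1+open) formula is just an illustration, not the tool's metric.
        """
        scores = defaultdict(list)
        for pr in prs:
            open_concerns = max(pr["llm_concerns"] - pr["addressed_in_comments"], 0)
            scores[pr["author"]].append(1.0 / (1 + open_concerns))
        return {author: mean(vals) for author, vals in scores.items()}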


I've seen worse ideas ;)


As far as I know there is currently no international alternative authority for this. So definitely not ideal, but better than not having the warnings.


Yes but that's not a legal argument.

Your honor, we hurt the plaintiff because it's better than nothing!


True, and agreed that lawsuits are likely. Disagree that it's short-sighted. The legal system hasn't caught up with internet technology and global platforms. Until it does, I think browsers are right to implement this despite legal issues they might face.


In what country hasn't the legal system caught up?

The point I raise is that the internet is international. There are N legal systems that are going to deal with this, and in 99% of them this isn't going to end well for Google if a plaintiff can show damages to a reasonable degree.

It's bonkers in terms of risk management.

If you want to make this a workable system you have to make it very clear this isn't necessarily dangerous at all, or criminal. And that a third party list was used, in part, to flag it. And even then you're impeding visitors to a website with warnings without any evidence that there is in fact something wrong.

If this happens to a political party hosting blogs, it's hunting season.


I meant that there is no global authority for saying which websites are OK and which ones are not. So it's not really that the legal systems in specific countries haven't caught up.

Lacking a global authority, Google is right to implement a filter themselves. Most people are really, really dumb online, and if the warnings aren't as clearly "DO NOT ENTER" as they are now, I don't think they will work. I agree that from a legal standpoint it's super dangerous. Content moderation (which is basically what this is) is an insanely difficult problem for any platform.


The alternative is to not do this.


It's slides all the way down. Once models support this natively, it's a major threat to slides ai / gamma and the careers of product managers.


I have Luddite feelings reading about alternatives to Git.

As an industry we have soooo many languages, frameworks, tools, distros, etc. It's like we're pre-metric-system, or pre-standardization of screw thread sizes.

I am really happy that at least for VCS, we have a nearly universal solution in Git, except for the big tech folks.

Sure, jj might solve some issues, but once it gets serious traction, all the tooling that works with repos (e.g. for repo analysis) will need to start supporting both git and jj. More docs need to be created, and junior developers will need to learn both systems (as git is not going anywhere).

Given all the downstream effects, I do not think introducing another VCS is a net positive.


In my case, the shoddiness and thoughtlessness of Git's user interface pisses me off so much that I just want it to be replaced. A good tool similar to Git may even explain Git's concepts better than Git itself or its documentation, which likes to discuss "some tree-ish refs".


I do see that, and git is not perfect, not by a long shot. I'd much rather see an evolution of git itself rather than getting more alternatives.


That essentially is what jj is though, given it can be used on git repositories transparently.


I started using git when my employer still used svn. This was possible because `git-svn` was so good and it seamlessly allowed me to use branches to do development while still committing to svn's trunk. I think jj is trying to do something similar: jj-unaware tools work reasonably well in colocated jj repositories, and jj-unaware peers (CI tools, etc) work exactly as they did before.

I do agree that you can't really use jj without also knowing "a fair amount" about git, but notably you never need to use the git cli to be an effective contributor to a github project, which is basically the same as `git-svn` was back before git got popular.


    More docs need to be created, junior developers will need to learn both systems (as git is not going anywhere).
Not true. Using jj is a choice made by a single developer. Nothing proprietary escapes from a jj clone of a git repo. Junior devs can use vanilla git if they really want to.

    All the tooling that works with e.g. repo analysis will need to start supporting git and jj
Also not true, unless I misunderstand what you mean by repo analysis. Colocated jj repos are more or less supersets of git repos.


I see what you mean, and having looked a bit more deeply today, it's clear that jj is very compatible.

However, my point is that having two ways of doing this within a team is already confusing. What if one person writes a "this is how we work" document on the wiki and mentions some git commands, and the next person rewrites it for jj? It's extra stuff to think about. It's like supporting a team of developers on both Windows and Linux (Debian, Arch, Ubuntu, etc). Teams do it, and it's all possible, but it was nice that at least everyone used git.


If a team has members who unilaterally rewrite best practice docs to include their preferred technologies, the fault is not with the tech.


I didn't pick a great example and didn't explain it eloquently, but I hope you understand my point. Supporting multiple ways of working takes effort.


We had rcs, cvs, svn, and now git. Why would it be the ultimate VCS and not be replaced again by something better?


If a product is 10x better than what's currently available, it will see rapid adoption. There was obviously something about git that made it MUCH better than the precursors and that's why it obliterated everything else.

I highly doubt that new tools will be 10x better than git. Maybe 20%?


One way I compare the git-to-jj transition (if it happens, or for whom it happens) to the svn-to-git transition is this: branching in svn was awful. It was heavyweight, and you were signing up for pain later down the road. Git made branching easy and normal, almost something you barely need to think about. jj does a similar thing for rebasing. For someone whose familiarity with git is clone, pull, push, merge, and creating branches (so basic, working, practical familiarity, where even "rebase -i" might be pushing the limits), jj lifts a feature (rebase) from "scary" to "normal" in much the same way git did for branching compared to svn.

That's just one aspect of the whole thing, and of course if you're a git rebase wizard (or have tools that make you that) then this won't seem relevant. But I think for a lot of people this might be a salient point.


You should really try it; it's clear you're learning some things about it just from this thread.

Will a product that is 10x better see rapid adoption if people who have not used it still choose to criticize it in the abstract?


I'm happy to give it a try when I have some time, but I have 0 problems with git right now so it's not top of my list. My critique is also really not towards jj specifically, I'm just discussing the idea that git has extremely wide adoption now and that this has benefits :)


Preforce, ClearCase, SourceSafe, TFS, Mercurial....


Perforce


Git absolutely is a productivity drain and should be replaced, particularly as agentic coding takes over, as its footgun elements get magnified when you have a lot of agents working on one codebase at once. I dislike jj as the way forward because I don't think it goes far enough for the amount of friction that moving to it as an industry would entail.

The next generation of VCS should be atomic, with a proper database tracking atoms and "plans" that construct repo states from atoms. A VCS built around these principles would eliminate branching issues (no branches, just atoms + plans); you could construct relationships from plan edit distances and timestamps without forcing developers to screw with a graph. This would also allow macros to run on plans and transform atoms, enable cleaner "diffs", and make it easy to swap functionality in and out of the atom database instead of having to hunt through the commit graph and create a patch.

The downside of an atomic design like this is you have to parse everything that goes into the VCS to get the benefits, but you can fall back to line-based parsing for text files, and you can store pointers to blobs that aren't parseable. I think the tradeoff in terms of DX and features is worth it, but getting people off git is going to be an epic lift.
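
To make that a bit more concrete, here's a minimal Python sketch of how I read the atoms + plans idea; every name and structure here is my own guess, not an existing tool:

    from dataclasses import dataclass
    from hashlib import sha256

    @dataclass(frozen=True)
    class Atom:
        """A parsed unit of code (function, class, config block), stored once."""
        content: str

        @property
        def atom_id(self) -> str:
            return sha256(self.content.encode()).hexdigest()

    @dataclass(frozen=True)
    class Plan:
        """An ordered recipe that assembles atoms into a repo state."""
        atom_ids: tuple[str, ...]
        timestamp: float

    class AtomStore:
        """Toy database of atoms and plans; a 'version' is just a row in plans."""
        def __init__(self):
            self.atoms: dict[str, Atom] = {}
            self.plans: list[Plan] = []

        def add_atom(self, content: str) -> str:
            atom = Atom(content)
            self.atoms[atom.atom_id] = atom
            return atom.atom_id

        def materialize(self, plan: Plan) -> str:
            """Reconstruct a repo state by concatenating the plan's atoms."""
            return "\n".join(self.atoms[a].content for a in plan.atom_ids)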


> Git absolutely is a productivity drain and should be replaced, particularly as agentic coding takes over,

Is this an oblique way of saying that Git should not be replaced?


No, but the lift of replacing git is huge, so we shouldn't do it for a Python2->Python3 situation; we should have a replacement that really brings big wins.


I have no idea what problem this is supposed to solve. Where is the V in VCS here? How do you track the provenance/history of changes?

You may not be "forcing" developers to "screw with a graph" (what?) but you are forcing them to screw with macros (we're adding a built-in scripting layer to the VCS?) and these unfamiliar new concepts called atoms and plans.

> A VCS built around these principles would eliminate branching issues (no branches, just atoms + plans)

And it would introduce zero new confusing issues of its own?

> This would also [...] make it easy to swap in and out functionality from the atom database instead of having to hunt through the commit graph and create a patch.

This is a weird use case. Version control systems aren't typically used for storing and swapping around bits of functionality as a first-class, ongoing concern.

Not to mention you still need to figure out how atoms get stitched together. How do you do it without diff-based patches? No superior solution exists, AFAIK.

    tl;dr use workspaces if you're using agents.


If you have a database of atoms and plans, the V is a row in a plan table, and you reconstruct history using plan edit distance, which is more robust than manually assigned provenance anyhow (it will retain some history for cherry picked changes, for instance).

I'm sure there would be new issues, but I think they'd be at the management/ops level rather than the individual dev level, which is a win since you can concentrate specialization and let your average devs have better DX.

Is it a weird use case? Imagine you refactor some code, but then realize that a function was being called in a slightly incorrect way after an earlier change (prior to the refactor, so the revert isn't trivial), and you have to go back and revert that change across, say, 100 files to make it fun, where the code isn't perfectly identical. With git you probably have to do surgery to create a patch; with an atomic system you could easily macro this change, or even expose a UI to browse different revisions of a piece of code cleanly (which would blow up with git).
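
A toy version of the plan-edit-distance idea, again with entirely made-up names and under my own assumptions, could look like:

    def plan_distance(a: set[str], b: set[str]) -> int:
        """Toy 'edit distance' between two plans, each given as a set of atom ids."""
        return len(a ^ b)

    def likely_predecessor(plan: set[str], history: list[tuple[float, set[str]]]) -> set[str]:
        """Pick the earlier plan closest to `plan`, preferring newer plans on ties.

        `history` is a list of (timestamp, atom_id_set) pairs. The point is to
        recover an approximate lineage without explicit parent pointers.
        """
        _, best = min(history, key=lambda item: (plan_distance(plan, item[1]), -item[0]))
        return best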


If I make a plan which causes the project to be identical to its state 5 years ago, the edit distance is zero, but in no way can you call that a measure of history.


You're still thinking in graphs. That plan would already exist in the database, you would just be making it a build target instead of whatever new plan was targeted before.


It seems as though you've come up with a model for representing source code repos in terms of a data model of your own design, solving problems of your own choosing. But what you describe is not a version control system in the generally agreed upon sense of the word.


It's important that jj (the local VCS) is fully compatible with Git (the protocol / interoperability standard).

I think this is jj's biggest advantage. Whether you use it is independent of whether anybody on your team uses it.


Having used jj for the last year and a half, I hear you on the tooling issues. From IDEs, to plugins, to docs, to workflows, to LLMs, jj is not as well supported as git.

Despite that, it's still a net time-saver for me, and I suspect the same will be true for others. Git imposes constant overhead, mostly with its poor UI, but also by some of its unnecessary models (e.g., staging as a separate concept).


As far as I can tell, jj intends to be more like a cross-VCS frontend (with their own native backend coming at some point). If tooling supports jj, it would automatically support git, jj's native backend, Google's hybrid backend and any other backend a user could add.


https://xkcd.com/927/ (and now we have 15 standards)


Ok a bit off topic but isn't a 1200W PSU overkill for this system? For a 9950X and 1080, 500W seems plenty.


I'm hoping to use most of this system for the next 10 years. At some point, I want to add some beefy GPUs to it when I get back into 3D again.


I see it says Corsair, so I cannot tell what exact model it is, but I did a relatively similar thing. The reason being: if you keep the load under a certain wattage, the PSU will run in passive cooling mode. My rig will never reach 50% of what the Corsair SF750 Platinum can deliver, let alone under normal light-load circumstances. It spins up its fans only when the load reaches ~300W or so.

Some people are very anal about any kind of noise coming out of their rigs. I personally undervolt everything to keep the fans at bay/minimum, and having extra headroom in the PSU department helps a lot.


Same. Part of the background info on this is that PSUs are usually most efficient at (or just under) 50% load. They are also more efficient at 240 V than 120 V, if you have the circuit. So the real efficiency can end up varying significantly, depending on how you use it. The efficiency is not usually much to write home about in terms of your electric bill, but it does help the PSU cool itself passively.
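
For a rough sense of the numbers (the draws below are ballpark assumptions for the parts mentioned upthread, not measurements):

    # Ballpark sustained draws in watts; assumptions, not measured values.
    cpu_w = 230       # Ryzen 9 9950X near full load
    gpu_w = 180       # GTX 1080 under load
    rest_w = 60       # board, RAM, drives, fans

    load_w = cpu_w + gpu_w + rest_w        # ~470 W worst case
    psu_w = 1200                           # rated PSU capacity

    load_fraction = load_w / psu_w         # ~0.39, near the ~50% efficiency sweet spot
    idle_w = 90                            # assumed light-desktop draw
    fan_threshold_w = 300                  # zero-RPM region like the SF750 example above
    print(f"{load_fraction:.0%} of capacity at full load; fan off at idle: {idle_w < fan_threshold_w}")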

