I recently asked Claude to make some kind of simple data structure and it responded with something like "You already have an abstraction very similar to this in SourceCodeAbc.cpp line 123. It would be trivial to refactor this class to be more generic. Should I?" I was pretty blown away. It was like a first glimpse of an LLM play-acting as someone more senior and thoughtful than the usual "cocaine-fueled intern."
HA is not about exceeding the limits of a single server. It's about still serving traffic when that best server I bought goes offline (or loses a memory chip, or a disk, or...).
Postgres replication, even in synchronous mode, does not maintain its consistency guarantees during network partitions. It's not a CP system - I don't think it would actually pass a Jepsen test suite in a multi-node setup[1]. No amount of tooling can fix this without a consensus mechanism for transactions.
Same with MySQL and many other "traditional" databases. It tends to work out because these failures are rare, and you can get pretty close with external leader election and fencing, but Postgres is NOT easy (likely impossible) to operate as a CP system in the CAP sense.
There are various attempts at fixing this (Yugabyte, Neon, Cockroach, TiDB, ...), each with its own downsides.
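For concreteness, "synchronous mode" here means something like the stock streaming-replication settings below (a sketch, not a complete HA setup). Even with these, a primary cut off from its standbys only blocks new commits; it keeps answering reads, and nothing in Postgres itself fences it off once a standby gets promoted on the other side of the partition:

    # postgresql.conf on the primary (synchronous streaming replication)
    synchronous_standby_names = 'FIRST 1 (replica1)'  # commits wait for replica1 to confirm
    synchronous_commit = remote_apply                 # ...and to have applied the WAL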
It truly is. I've seen multiple pre-seed startups and founders burn significant amounts of personal money in order to land an O-1A visa, because of how much capital and mentorship is available here.
In my own anecdotal experience, Claude Code found a bug in production faster than I could. I was the author of said code, written by hand 4 years ago. The GP's claim is perhaps not all that unsubstantiated. My role is moving more towards QA/PM nowadays.
For sure. Not hard fails, but bad fixes. It confidently thought it fixed a bug, but it really didn't. I could only tell (the bug was fairly complex) because I tried reproducing it before and after. Ultimately, I believe it wasn't given sufficient context. It certainly failed to do what I asked in round 1 and round 2, but eventually got it right (a rendering issue in a barcode designer).
These incidents have become less and less frequent over the last year; switching to Opus cut the failure rate. Same goes for code reviews. Most of it is fluff, but it does give useful feedback if the instructions are good. For example, I asked for a blind code review of a PR ("Review this PR") and got some generic commentary. When I made the prompt more specific ("Follow the API changes across modules and see the impact"), it found a serious bug.
The number of times I've had to give up in frustration has been going down over the last year, so I tend to believe a swarm of agents could do a decent job of autonomous development/maintenance over the next few years.
We have a Q&A database. The questions and answers are both trigram-indexed and also have embeddings, all in Postgres. We then run pgvector and trigram searches and combine the results by relevance score.
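A minimal sketch of that kind of hybrid query, assuming a hypothetical qa_pairs table with a pg_trgm index on the question text and a pgvector embedding column; the names, the tiny vector(3) dimension, and the 50/50 weighting are all made up for illustration:

    -- assumes: CREATE EXTENSION pg_trgm; CREATE EXTENSION vector;
    -- hypothetical table: qa_pairs(id bigint, question text, answer text, embedding vector(3))
    WITH trigram AS (
      SELECT id, similarity(question, 'reset password') AS score
      FROM qa_pairs
      WHERE question % 'reset password'            -- pg_trgm similarity filter
    ), semantic AS (
      SELECT id, 1 - (embedding <=> '[0.1,0.2,0.3]') AS score
      FROM qa_pairs                                -- cosine distance -> similarity
      ORDER BY embedding <=> '[0.1,0.2,0.3]'
      LIMIT 50
    )
    SELECT id,
           0.5 * coalesce(t.score, 0) + 0.5 * coalesce(s.score, 0) AS combined
    FROM trigram t
    FULL OUTER JOIN semantic s USING (id)
    ORDER BY combined DESC
    LIMIT 10;

A fixed weighted sum is the simplest way to combine the two; reciprocal rank fusion is a common alternative when the two score scales don't line up.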
I avoid rebase like the plague (perhaps because of my early experiences with it). I used to get the same conflicts for the same commits again and again, and rerere's store-and-replay kinda helped, but not always. Merge always worked for me (once I resolve conflicts, that's the end of it). Now I always merge main into my feature branch and then merge it back into main when ready. Does it pollute the history? Maybe, but I've never looked. It doesn't matter to our team.
Same here. I lead a team and I really don't care about keeping the git history "clean". It is what it is. Squashing commits wouldn't help me at all since the history rarely ever factors into our workflow.
Rebase and other fancy Git things have caused problems in the past, so I avoid getting too complex with Git. I'm not a Git engineer, I'm a software engineer.
Merging has always just worked, and I know exactly what to expect. If there's a big hairy branch that I need to merge, and I know there will be conflicts, I create a branch from Main, merge the hairy branch into it, and see what happens. Fix the issues there, and then merge that branch to Main when everything is working. Merge is simple, and I don't have to be a master of Git to get things done.
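That workflow in plain git, with hypothetical branch names (Main and big-hairy stand in for whatever you use):

    # scratch integration branch off Main
    git checkout Main && git pull
    git checkout -b merge-big-hairy

    # merge the hairy branch here and sort out conflicts in isolation
    git merge big-hairy
    # ...fix conflicts, `git add` the resolved files, build and test...
    git commit

    # when everything works, fold it back into Main
    git checkout Main
    git merge merge-big-hairy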
Perhaps. But you can see the DX of rebase is abysmal compared to merge: squash, rerere, force-push, remembering to push to the remote before rebasing, more coordination if multiple people are working on the feature branch, etc.
I still prefer merge. It's simple and gets out of my way as long as I don't care about the purity of the history.