YMMV, but I've found that I actually do way more of that type of "thinking hard" thanks to LLMs. With the menial parts largely off my plate, my attention has been freed up to focus on a higher density of hard problems, which I find a lot more enjoyable.
Yup, there is a surprising amount of boilerplate in programming, and LLMs definitely can take it off your plate and let you focus on the more important problems. For a person with a day job, working on side projects actually became fun again with LLMs, even with limited free time and mental energy to invest.
The way I summed it up to a friend recently is that Gemini 3 is smarter but Grok 4 works harder. Very loose approximation, but roughly maps to my experience. Both are extremely useful (as is GPT-5.2), but I use them on different tasks and sometimes need to manage them a bit differently.
After a certain point, someone else's insistence on self-harm ceases to be a good excuse to infringe on my freedom. We don't ban hammers because some people accidentally damage their property/body, and it's a lot easier to do that with a hammer than an unlocked bootloader.
I'd also add that I just don't like the idea in principle that I should have to trust the agent not to act maliciously. If an agent can run rm -rf / in an extreme edge case, theoretically it could also execute a container escape.
Maybe vanishingly unlikely in practice, but it costs me almost nothing to use a VM just in case. It's not impossible that certain models turn out to be poorly behaved, that attackers successfully execute indirect prompt injection via malicious tutorials targeting coding agents, or that some shadowy figure runs a plausibly deniable attack against me through an LLM API.
I called this outcome the second I saw the title of the post the other day. Granted, I have some experience in that area, as someone who once upon a time had the brilliant idea to launch a product on HN called "Napster.fm".
Here's a trivial example: https://supremecommander.ai. A raw CSS implementation of my blog's logo would have been a pain to build and maintain, but with Astro the code is relatively straightforward JS that becomes pure HTML/CSS at build-time.
The other nice thing is that you can throw all kinds of preexisting components from React/whatever into your site, and it will ship zero JS to the client until you explicitly flag a specific JS resource as an "island".
The only special thing about "islands" is that they're an escape hatch from the default behavior of JS being strictly build-time-evaluated. I found the terminology and description a little confusing at first too, because it makes it sound more special than it is. But the concept makes sense when you understand the context of Astro's intentional default behavior.
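Roughly what that looks like in practice (a sketch of the pattern, not my actual logo code; the Counter component is made up for illustration and assumes the React integration is installed):

    ---
    // Frontmatter runs only at build time; none of this JS ships to the client.
    import Counter from '../components/Counter.jsx'; // hypothetical React component
    const bars = Array.from({ length: 20 }, (_, i) => i);
    ---
    <!-- Build-time JS that becomes pure HTML/CSS in the output -->
    {bars.map((i) => (
      <div class="logo-bar" style={`--i: ${i}`}></div>
    ))}

    <!-- Default behavior: rendered once at build time, zero JS shipped -->
    <Counter />

    <!-- An "island": same component, but explicitly hydrated in the browser -->
    <Counter client:visible />

client:visible defers hydration until the component scrolls into view; client:load would hydrate it immediately on page load. Everything without one of those directives stays static.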
Good to know, thanks! I can't reproduce that on Android (and it's especially weird since the site has almost no JS), but I'll investigate and try to figure it out.
lol, well I'd appreciate the heads up even if it were meant to be aggressive, but thanks. Should be good now; the issue was most likely a CSS animation causing your browser tab to crash, so now it'll just turn off the animation whenever the page is quickly reloaded twice in a row.
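For the curious, the shape of the fix is roughly this (names and the threshold are illustrative, not the exact code on the site):

    // On each load, check how recently the previous load happened.
    const KEY = 'last-load';
    const THRESHOLD_MS = 3000; // what counts as "quickly reloaded"

    const now = Date.now();
    const last = Number(localStorage.getItem(KEY) || 0);

    if (now - last < THRESHOLD_MS) {
      // Two loads in quick succession: assume the animation crashed the tab,
      // and flip a class the CSS uses to disable it,
      // e.g. .no-animation * { animation: none !important; }
      document.documentElement.classList.add('no-animation');
    }

    localStorage.setItem(KEY, String(now));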
Do multiple or all of the things mentioned in the other comments for redundancy, then set up a Delaware non-charitable purpose trust with a reasonably large endowment. Make sure your lawyers plan the trust carefully with reliable enforcement and position it to be well defended against "capriciousness"[0] claims.
Ah yeah, totally unrelated. I just thought it was kind of funny to take "Supreme Allied Commander" (like Eisenhower) and insert "AI" in there. Wasn't sure whether I preferred supremecommander.ai or supremeaicommander.com, but ultimately went with the former and set the latter as a redirect.
I see where you're coming from, and I agree with the implication that this is more of an issue for inexperienced devs. Having said that, I'd push back a bit on the "legacy" characterization.
For me, if I check in LLM-generated code, it means I've signed off on the final revision and feel comfortable maintaining it to a similar degree as though it were fully hand-written. I may not know every character as intimately as that of code I'd finished writing by hand a day ago, but it shouldn't be any more "legacy" to me than code I wrote by hand a year ago.
It's a bit of a meme that AI code is somehow an incomprehensible black box, but if that is ever the case, it's a failure of the user, not the tool. At the end of the day, a human needs to take responsibility for any code that ends up in a product. You can't just ship something that people will depend on not to harm them without any human ever having had the slightest idea of what it does under the hood.
Some of those words appear in my comment, but not in the way you're implying I used them.
My argument was that 1) LLM output isn't inherently "legacy" unless vibe coded, and 2) one should not vibe code software that others depend on to remain stable and secure. Your response about "abandonware" is a non sequitur.
I presume that through some process one can exorcise the legacy/vibe-codiness away. Perhaps code review of every line? (This would imply that the bottleneck to LLM output is human code review.) Or would having the LLM demonstrate correctness via generated tests be sufficient?
Just to clarify, you're inferring several things that I didn't say:
* I was agreeing with you that all vibe code is effectively legacy, but obviously not all legacy code is vibe code. Part of my point is also that not all LLM code is vibe code.
* I didn't comment on the dependability of legacy code, but I don't believe that strict vibe code should ever be depended on in principle.
As far as non-vibe coding with LLMs, I'd definitely suggest some level of human review and participation in the overall structure/organization. Even if the developer hasn't pored through it line by line, they should have signed off on the tech stack/dependencies/architecture and have some idea of what the file layout and internal modules/interfaces look like. If a major bug is ever discovered, the developer should know enough to confidently code review the fix or implement it by hand if necessary.
Take responsibility by leaving good documentation of your code and a beefy set of tests; that way future agents and humans will have something to bootstrap from, not just bare code.