Yeah, you are right: I was underselling it by watering it all down to LOC. But I was mostly talking about tangible outcomes that are obvious.
> AI can write 15k lines of code, but it cannot take Liability for a single one.
The addiction to "visible output" (like LOC) is hard to break because it feels like work. But in the AI era, "Judgment" is the new labor.
Think of it like a traditional Japanese Hanko (seal). The value isn't in the paper or the ink (which are cheap/commodities), but in the authority of the stamp that guarantees the content.
Your "tangible result" is no longer the code itself, but the trust that comes from your seal of approval. Keep guarding.
Yes, there will always be someone who is needed to program stuff. Totally agree with that.
But my question is "how many of those will be needed?" I am not saying that programmers are not needed at all.
When fewer are needed, there will be so much competition for those jobs that many essentially won't be able to find work, as there will always be someone willing to do the job at a lower wage and come to work with more youthful energy.
I've had a long career, and seen a number of systemic changes.
I've lived through two software "explosions" where minimal skills led to large output. The first was web sites and the second was mobile.
Web sites are (even now) pretty easy. In the late 90's and early 2000's, though, there was tremendous demand for web site creation. (Every business everywhere suddenly needed a web presence.) This led to a massive surge in building-web-site training. No time for a 3-year degree, barely time for 90 days of "click here, drag that".
So there was this huge percentage of "programmers" that had a very shallow skill set. When the bubble burst it was this group that bore the brunt.
Fast forward to 2007, and mobile apps become a "thing". The same pattern evolves: fast training, shallow understanding, apps that do very little (most of the heavy lifting, if it exists at all, is on the backend). Not a lot of time spent on UI or app flow etc.
This time around the work is also likely to be done offshore. Turns out simple skills can be taught anywhere, tiny programs can be built anywhere.
Worse, management typically didn't understand the importance of foundations like good database design, coherent code, forward thinking, maintenance etc. Programs are 10% creation, 90% maintenance (adding stuff, fixing stuff etc.) From a management point of view (and indeed from those swathes of shallow practitioners) the only goal is "it works."
AI is this new (but really old) idea that shallowness is sufficient. And just like before it first replaces people who themselves have only shallow skills; who see "coding" as the goal of their job.
We are far from the end of this cycle, and who knows where it will go, but yes, those with shallow skills are likely to be first on the chopping block.
Those with better foundations (a better understanding of good and bad, perhaps with a deeper education, or deeper experience) and the ability to communicate that value to management are positioned well.
In other words, yes the demand for "lite" developers will implode. But at the same time demand for quality devs, who can tell good from bad (design, code, ui etc) goes up.
If you are a young graduate, you're going to be light on experience. If you're an older person with very shallow (or no) training, you're easily replaced. If you think development is code, you're not gonna do well.
In truth development is not about code (and never has been). It's about all the processes that lead up to the code. Where possible (even at college level) try to focus on upskilling on the "big picture" - understanding the needs of a business, the needs of the customer, the architecture and design that results in "good" or "bad".
AI is a tool. It's important to understand when it's doing good, but also when it's doing bad.
> AI is this new (but really old) idea that shallowness is sufficient.
That’s not the whole story and certainly not the core concern, which is more about developers who already have deep experience, using AI to multiply their output.
You've seen the "Dot-com" and "Mobile" cycles. This "AI cycle" feels faster, but the trap is the same: Mistaking Access for Mastery.
In Japanese martial arts, we have "Shuhari" (Obey, Digress, Separate).
AI gives everyone a shortcut to the final stage ("Look, I made an app!"), skipping the painful "Obey" stage where you learn why things break.
As you said, when the bubble bursts, only those who understand the "Foundation" (database design, consistency) will remain standing. The tools change, but the physics of complexity do not.
I agree with the takes, but my only question would be this:
If everyone is doing high-level stuff like architecture and design, how many of "those people" will really be needed in the long term? My intuition tells me the number of engineers the market needs will shrink.
Of course it will shrink! Every industry ever has shrunk as tooling got better.
That said, we are a long way from "peak software". There is a lot of scope for new things, so there's room for a lot of high-level people.
And of course the vast majority of current juniors won't step up at all. Just like the web site devs of the early '00s went off to be estate agents or car salesmen or whatever. Those with shallow training are easily replaced.
The wheel will turn though, and those with a quality, deep, education focused on fundamentals (not job-training-in-xxx-language) are best placed to rise up.
My take: The market for "Coders" will shrink, but the market for "Problem Solvers who use Logic" will explode.
Think of "Scribes" (people who wrote letters for others) in the past. When literacy became universal, the job of "Scribe" vanished. But the amount of writing in the world increased a billionfold.
Engineering is becoming the new literacy. We won't be "Engineers" anymore; we will just be "People who build things." The title disappears, but the capability becomes universal.
The process you have described for Codex is scary to me personally.
it takes only one extra line of code in my world (finance) to have catastrophic consequences.
even though i am using these tools like claude/cursor, i make sure to review every small bit they generate. to that end, i ask it to create a plan with steps, then perform each step and ask me for feedback; only when i give approval/feedback does it either proceed to the next step or iterate on the previous one. on top of that, i manually test everything I send for PR.
because there is no value in just sending a PR vs sending a verified/tested PR
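(As an aside, the plan → step → approval loop described above can be sketched in a few lines. This is a hypothetical illustration only: `execute_step` and `review` are invented stand-ins for the agent and the human reviewer, not any real Claude or Cursor API.)

```python
# Hypothetical sketch of a human-in-the-loop review gate: each planned step
# runs only after explicit approval, and feedback triggers an iteration.

def run_with_approval(plan_steps, execute_step, review):
    """Execute each step only after approval; iterate on reviewer feedback."""
    completed = []
    for step in plan_steps:
        while True:
            result = execute_step(step)
            feedback = review(step, result)  # None signals approval
            if feedback is None:
                completed.append((step, result))
                break  # proceed to the next step
            # otherwise iterate on the previous step with the feedback folded in
            step = f"{step} [revised: {feedback}]"
    return completed
```

The point of the structure is that nothing advances without a human decision, which matches the "verified/tested PR" standard above.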
with that said, I am not sure how much of your code is getting checked in without supervision, as it's very difficult for people to review weeks' worth of work at a time.
Heya, I’m the author of the post! To be clear I have AI write probably 95% of my code these days, but I review every line of code that AI writes to make sure it meets my high standards. The same rules I’ve always had still apply — to quote @simonw “your job is to deliver code you have proven to work”.
So while I’m enthusiastic about AI writing my code in the literal sense, it’s still my code to understand and maintain. If I can’t do that then I work with AI to understand what was written — and if I can’t then I’ll often give it another go with another approach altogether so I can generate something I can understand. (Most of the time working together to understand the code works better, because I love to learn and am always open to pushing my boundaries to grow — and this process can be tuned well to self-directed learning.)
And to quote a recent audit: “this is probably one of the cleanest codebases I’ve ever audited.” I say that to emphasize the fact that I care a lot about the code that goes into my codebase, and I’m not interested in building layers of unchecked AI slop for code that goes into my apps.
personally it would be too difficult for me to understand large chunks of work at a time, like "a week's worth of code" in your case. just wondering, how do you go about it?
second, how do you pass such large PRs to your co-workers? (if you have any)
So I will state upfront that my current experience is not the most common team dynamic because I'm an indie developer [^1]. But I've worked at many companies — as small as 2 and as large as Twitter — so I am very familiar with the variety of engineering processes.
I can share how I work with agentic systems, because I (and now others) have found it to be very effective. I still have the engineering-like experience of thinking deeply — I've gotten great results across codebases small and large — and almost everyone who I've run a workshop with has come back to me and said that this was a missing piece for them when they work with agentic systems.
I'm the kind of person I alluded to at the end of my blog post when I wrote "Some people couldn’t start coding until they had a checklist of everything they needed to do to solve a problem.", so this description will be representative of that.
1. I start a document in Craft [^2] whenever I think of a great feature, and keep adding to that doc over the next few months whenever I have a new idea. I try to turn that document into something cohesive — imagine something like a PRD without the formality.
2. Then when it comes time to build the feature, I will just sit and write out a prompt (with lots of pointers to source code and relevant screenshots) that considers everything that needs to be built. I'll write out our goals for the feature, how the client should work, how the server should behave, the expected user experience, and anything else that's relevant. That process is really clarifying because it unearths a whole bunch of meaningful context — and context is exactly what a large language model needs!
3. Last but not least I'll simply add something like "Please ask any clarifying questions you may have, or for any additional details that you may find helpful". That leads to questions which I spend anywhere from another 5 to 30 minutes on, which fills in gaps I hadn't even thought to consider. And sure that may take time, but now the model has *so many useful details* that most people never add to their context window.
4. Once you have that, the model can act much more surgically than the experience most people have with agentic systems. Since it's so surgical I can go do something else like work on my newsletter, my AI workshops, or even go for a walk. This is why I much prefer to work this way, as opposed to the hands-on process I described Claude Code users [often] preferring in the blog post. (Which as I mentioned there is perfectly fine, just not my cup of tea anymore.)
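Steps 2 and 3 above can be sketched mechanically. To be clear, this is my own illustrative sketch: the helper name and section headings are invented, not the author's actual template — only the closing clarifying-questions line is quoted from the post.

```python
# Hypothetical sketch: assemble feature context into one prompt, then close
# with the clarifying-questions request. Section names are invented examples.

def build_feature_prompt(goals, client_notes, server_notes, ux_notes, pointers):
    sections = [
        ("Goals", goals),
        ("Client behavior", client_notes),
        ("Server behavior", server_notes),
        ("Expected user experience", ux_notes),
        ("Relevant source files / screenshots", "\n".join(pointers)),
    ]
    body = "\n\n".join(f"## {title}\n{text}" for title, text in sections)
    closing = ("Please ask any clarifying questions you may have, "
               "or for any additional details that you may find helpful.")
    return f"{body}\n\n{closing}"
```

The mechanics are trivial on purpose; the value is in forcing yourself to articulate each section before the model starts working.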
---
I'd still like to touch on working with people though. I do quite a bit of open source work and there I still follow what people would consider standard processes and best practices. If I'm doing a week's worth of work I still don't want to dump a whole ton of code in one commit, so I'll break everything down into very atomic commits that spell out exactly what I'm doing. I also write lots of documentation, update references, and add tests like a person should.
But there's also nothing to say you have to generate a week's worth of code in one go. It's important to remember that you're in control of how you work. It may be more fitting to define smaller tasks (which will take less time for each independent step) and work on them serially, which you can then hand off to your coworkers one by one.
Ultimately my message is that people still need to exercise their best judgment and think for themselves. AI doesn't change what we've come to accept as best practices, it automates and accelerates them. In fact, the models keep getting better the more they are trained on our best practices, so my assertion is that success using AI seems to correlate well with autonomy, creativity, and critical thinking skills.
Anyhow, long answer for a short question — but I hope it helps! And if there's anything unclear: please ask any clarifying questions you may have, or for any additional details that you may find helpful.
From your point of view, what's the approach taken by someone who rose through the ranks? Is it mostly people and process management and less to do with tech?
Accomplished Senior Engineer with over 9 years of experience designing scalable Ruby on Rails applications, RESTful APIs, and microservices for high-traffic systems.
Skilled in AI-driven solutions, including LLMs and AI Agents, with expertise in architecting robust systems and optimizing performance.
> AI can write 15k lines of code, but it cannot take Liability for a single one.
Thanks for writing this, I needed it.