And they may or may not "listen": they are non-deterministic and have no formal mechanism or requirement to adhere to anything you write. You know this because, in your own experience, they violate your rules all the time.
Sure, but in my experience LLMs behave much more consistently than humans with regard to quality. LLMs don't skip tests to make a deadline, for example.
And now we have LLMs that review LLM-generated code, so it's easy to verify quality standards.
At the same time, the LLM will, with some regularity, ignore the patterns or best practices in your codebase, or implement the same thing twice in different ways.
Putting absolutes and anecdotes aside, there are certain things they do, or don't do, that a human typically wouldn't.
The current AI tools are extremely good at telling you how an existing system works. You don't need that one super-knowledgeable person anymore.
With the right MCPs or additional context, you can have the tools go read PRs or the last 10 tickets that impacted the system, and even go out and read cloud configuration or log files to tell you about problems.
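For instance, here's a rough sketch of such a server using the @modelcontextprotocol/sdk TypeScript API; the ticket endpoint and the "recent_tickets" tool name are made up for illustration:

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    // Hypothetical MCP server exposing recent tickets to a coding agent.
    const server = new McpServer({ name: "tickets", version: "1.0.0" });

    server.tool(
      "recent_tickets",
      { system: z.string(), limit: z.number().default(10) },
      async ({ system, limit }) => {
        // Hypothetical internal ticket API; swap in Jira, Linear, GitHub, etc.
        const res = await fetch(
          `https://tickets.example.internal/api?system=${encodeURIComponent(system)}&limit=${limit}`
        );
        const tickets = await res.json();
        return { content: [{ type: "text", text: JSON.stringify(tickets, null, 2) }] };
      }
    );

    // Serve over stdio so an agent (Claude Code, Cursor, etc.) can attach it as a tool.
    await server.connect(new StdioServerTransport());

Register that in the agent's MCP config and "what broke here last sprint?" becomes answerable from real ticket history instead of tribal knowledge.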
The concept of a "bus factor" is a relic of the past.
I feel the same way. AI coding skill seems to sit somewhere between intermediate and advanced. Unless you're working in a space where you've thought very deeply and developed your own solutions (the "gotcha, you didn't know this" kind of problem), it doesn't really feel like AI falls short.
So far, I’ve been reading through almost 100% of the code AI writes because of the traps and edge cases it can introduce. But now, it feels less like “AI code is full of pitfalls” and more like we need to focus on how to use AI properly.
Simply ask them to quantify the cost of shaping those materials into machinery, relative to other means of energy production. You will be met with hostility and scorn, accused of all sorts of improprieties, and ejected from the tribe, without ever receiving a data-supported answer.
Because it's such a weasel "just asking questions" thing to do.
If you had a concern about the material costs of renewables, you would know what they are, and if you wanted a good-faith discussion, you'd also be able to compare them against the material costs of legacy energy.
I've handled the sloppiest slop with LLMs and turned the worst code into error-free, modern, tested code in a fraction of the time it used to take me.
People aren't worried about cost, because $1k in credits to get 6 months of work done is a no-brainer.
A year from now, semi-autonomous LLMs will produce entire applications while we sleep. We'll all be running multi-agent setups and basically writing specs and md files all day.
I've become convinced that the devs who talk about having to fix the code are the same ones who would make incredibly poor managers. When you manage a team, you need to focus on the effect of the code, not the small details.
In a pair-programming exercise, this sort of developer would find themselves flustered at how a junior approached problem solving and would just fix it themselves. I strongly suspect a loss of the feeling of control is at play here.
I just had an issue where Opus misspelled variable names between usages. These are fundamental and elementary mistakes that make me deeply distrust anything slightly more complex that comes out of it.
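For illustration, the class of mistake I mean (a hypothetical reconstruction in TypeScript, not the actual output):

    // Hypothetical reconstruction of the kind of slip described above:
    function connectWithRetry(retryTimeoutMs: number) {
      console.log(`retrying in ${retryTimoutMs} ms`); // misspelled: "Timout"
      // TypeScript: error TS2552: Cannot find name 'retryTimoutMs'.
    }

The compiler or a linter catches that one immediately, which says something about how shallow the failure is, and how little the model was tracking its own names.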
It's great for suggesting approaches, but the code it generates reads like it was written without real understanding (which it was).
I can't trust what it writes, and if I have to go through it all with a fine-toothed comb, I may as well write the code myself.
So my conclusion is that it's a very powerful research tool, and an atrocious junior developer who has dyslexia and issues with memory.
Gold-plating code to the best quality standards takes an LLM seconds, whereas doing it by hand would take you days.