Hacker News | jf22's comments

You can tell LLMs to conform to certain quality standards and code in a way you think is best.

Gold plating code with the best quality standards takes LLMs seconds whereas it would take you days doing it by hand.
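
For example, most coding agents read a standing rules file from the repo root (the file name varies by tool: CLAUDE.md, AGENTS.md, .cursorrules). The rules below are illustrative, not any particular project's:

    # Coding standards (the agent reads this before every task)
    - Every public function gets type annotations and a docstring.
    - New behavior ships with a unit test; never delete or skip a failing test.
    - Follow the existing error-handling pattern; no bare except clauses.
    - Prefer small, pure functions over flags and shared mutable state.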


> You can tell LLMs to...

and they may or may not "listen": they are non-deterministic and have no formal means or requirement to adhere to anything you write. You know this because, in your own experience, they violate your rules all the time.


Sure, but in my experience LLMs behave much more consistently than humans with regard to quality. LLMs don't skip tests because they have to hit a deadline, for example.

And now we have LLMs that review LLM generated code so it's easy to verify quality standards.
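
A sketch of what that review loop can look like scripted (the reviewer here is Claude Code's non-interactive print mode; treat the exact CLI invocation as an assumption and substitute whichever agent you use):

    import subprocess

    # Diff the current branch against main.
    diff = subprocess.run(
        ["git", "diff", "main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Have a second model review the first model's output.
    review = subprocess.run(
        ["claude", "-p",
         "Review this diff against our quality standards. "
         "Flag missing tests, dead code, and style violations:\n" + diff],
        capture_output=True, text=True, check=True,
    ).stdout
    print(review)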


At the same time, the LLM will, with some reliability, ignore the patterns or best practices in your code, or implement the same thing twice in different ways.

Putting absolutes and anecdotes aside, there are certain things they do, or don't do, that a human typically wouldn't.


Humans will, with the same reliability, ignore the patterns or best practices in your code and implement the same thing twice in different ways.

In 25 years I've never worked in a codebase where this wasn't true.


As a group, or the same person under the same constraints?

It's far too nuanced for generalities, tbh.


The current AI tools are extremely good at telling you how an existing system works. You no longer need that one super-knowledgeable person.

With the right MCPs or additional context, you can have the tools go read PRs or the last 10 tickets that impacted the system, and even read cloud configuration or log files to tell you about problems.
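
As a rough illustration of that context-gathering (a sketch using the GitHub CLI rather than a specific MCP server; an MCP tool would wrap something similar):

    import json
    import subprocess

    # Pull the last 10 merged PRs as context for the model.
    # Assumes the GitHub CLI (gh) is installed and authenticated.
    result = subprocess.run(
        ["gh", "pr", "list", "--state", "merged", "--limit", "10",
         "--json", "number,title,body"],
        capture_output=True, text=True, check=True,
    )
    prs = json.loads(result.stdout)
    context = "\n\n".join(
        f"PR #{p['number']}: {p['title']}\n{p['body']}" for p in prs
    )
    # Feed `context` to the model alongside your question about the system.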

The concept of a "bus factor" is a relic of the past.


I feel the same way. AI coding skills seem to sit somewhere between intermediate and advanced. Unless you’re working in a space where you’ve thought very deeply and developed your own solutions—those “gotcha, you didn’t know this” kinds of problems—it doesn’t really feel like AI falls short.

So far, I’ve been reading through almost 100% of the code AI writes because of the traps and edge cases it can introduce. But now, it feels less like “AI code is full of pitfalls” and more like we need to focus on how to use AI properly.


I'd disagree the fervor is religious.

I think it's more frustration. Pointing out there is a maintenance cost to infrastructure is silly and doesn't add to the discussion.

We all know materials have to be shaped into machines to extract energy.


Simply ask them to quantify the cost of shaping those materials into machinery, relative to other means of energy production. You will be met with hostility and scorn, accused of all sorts of improprieties, and ejected from the tribe, without ever receiving a data-supported answer.


Because it's such a weaselly, "just asking questions" thing to do.

If you had a genuine concern about the material costs of renewables, you should know what they are; and if you wanted a good-faith discussion, you'd also be able to compare them against the material costs of legacy energy.


You have received data from several people in this thread alone. Have you updated your opinion accordingly?


I'm not sure what you're trying to say here. Canada can be just as corrupt.


Do you agree or disagree that you provided a link to a fitness influencer's website as "evidence"?


Yes the workflow has shifted.

I've handled the sloppiest slop with LLMs and turned the worst code into error-free, modern, tested code in a fraction of the time it used to take me.

People aren't worried about cost, because $1k in credits to get six months of work done is a no-brainer.

A year from now, semi-autonomous LLMs will produce entire applications while we sleep. We'll all be running multi-agent setups and basically writing specs and .md files all day.


So? Getting a month's worth of junior-level code in an hour is still unbelievable.


What's the improvement here? I spend more time fixing it than I would doing it myself anyway. And I have less confidence in the code Opus generates.


I've become convinced that the devs who talk about having to fix the code are the same ones who would make incredibly poor managers. When you manage a team, you need to focus on the effect of the code, not the small details.

This sort of developer, in a pair programming exercise, would get flustered at how a junior approached problem solving and just fix it themselves. I strongly suspect a loss of the feeling of control is at play here.


What are you fixing?


I just had an issue where Opus misspelled variable names between usages. These are fundamental and elementary mistakes that make me deeply distrust anything slightly more complex that comes out of it.
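
To make the failure mode concrete, here is the shape of the mistake (a constructed example, not the actual generated code):

    def summarize(items):
        total_count = len(items)
        # Defined as `total_count` above, recalled as `totl_count` below:
        return f"{totl_count} items processed"  # NameError at runtime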

It's great for suggesting approaches, but the code it generates reads like it was written without actual understanding (which is correct).

I can't trust what it writes, and if I have to go through it all with a fine-toothed comb, I may as well write the code myself.

So my conclusion is that it's a very powerful research tool, and an atrocious junior developer who has dyslexia and issues with memory.


How long does it take you to go through the code vs writing it yourself?


Idk I've forgotten a lot.


I already miss the fun heads down days of unraveling complex bugs.

Now I'm just telling AI what to do.


AIs are amazing at understanding even utterly horrific code.

I've refactored the sloppiest slop with AI in days, with zero regressions. If I had done it manually, it could have taken months.

