
AI tools seem to be most useful for little things: fixing a little bug, making a little change. But those things aren't always very visible and don't really move the needle.

They may help you build a real product feature more quickly, but AI is not necessarily doing the research and product design, which is probably the actual bottleneck for seeing real impact.



If they're fixing all the little bugs, that should give everyone much more time to think about product design and do the research.


Or a lot of small fixes all over the place. Yet in reality we don't see this anywhere, and I'm not sure what exactly that means.

Maybe overall complexity creeping up cancels out any small gains, or devs are becoming lazier and just copy-paste LLM output without taking a serious look at it?

My company hasn't adopted or allowed the use of LLMs in any way for anything so far (private client data security is more important than any productivity gains, which seem questionable anyway when looking around, and serious data breaches can easily end up with fines in the hundreds of millions).


It's also possible that all of these gains from fixing bugs are simply improving infrastructure and stability rather than finding new customers and opening up new markets.

Having worked on software infrastructure, it's a thankless job. Your most heroic work has little visibility, and the result is that nothing catastrophic happened.

So maybe products will have better reliability and fewer bugs? And we all know there's crappy software that makes tons of money, so there isn't necessarily a strong correlation.


Assuming a well-functioning business, yes.



