
What about when they say, 'When I "stare at a mess of unfamiliar code," I'll ask ChatGPT to explain it, and it will fix the problem for me'?
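A minimal sketch of that workflow, assuming the openai Python package with an API key in OPENAI_API_KEY; the model name and file path are illustrative:

    # Ask the model to explain a file of unfamiliar code and flag likely bugs.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    unfamiliar_code = open("legacy_module.py").read()  # hypothetical file

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whatever model you have access to
        messages=[
            {"role": "system",
             "content": "Explain what this code does and point out likely bugs."},
            {"role": "user", "content": unfamiliar_code},
        ],
    )
    print(response.choices[0].message.content)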



If it works I'll say "I can automate you away". If it doesn't work I'll say "You checked in that code without understanding what it does?"

In which scenario does the cheater win?


> If it works I'll say "I can automate you away".

I don't think deciding not to use LLMs makes you any more immune to automation, because other devs will use them.

If anything, assuming it really does increase productivity, it'd seem to me that devs using LLMs would be safer than devs in the same domain who refuse to.

> If it doesn't work

The extent to which a human using an LLM still produces buggy code should already be taken into account by assignments.


I'll answer my own rhetorical question: the human wins when they create value, both by asking the right questions (crafting better prompts than the next person) and by recognizing deficiencies in the computer's output. A sketch of the second half follows.
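A hedged sketch of the "recognizing deficiencies" half, assuming a hypothetical project with a pytest suite: don't trust a model-suggested patch until an independent check passes.

    # Apply a model-suggested patch, then verify it against the project's tests.
    import subprocess

    def apply_and_verify(patched_source: str, path: str = "legacy_module.py") -> bool:
        with open(path, "w") as f:
            f.write(patched_source)
        # The human's value-add: an independent check the model can't talk past.
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        print(result.stdout)
        return result.returncode == 0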



