At $JOB, IT actually bundles uBlock into all the browsers available to us; per CIA (or one of those three-letter agencies, might've even been the NSA) guidelines, it's a very important security tool. I work in banking.
Moreover you don’t even need a 0-day to fall for phishing. All you need is to be a little tired or somehow not paying attention (inb4 “it will never happen to ME, I am too smart for that”)
If you state “in 6 months AI will not require that much knowledge to be effective” every year and it hasn’t happened yet, then every time it has been stated it has been false up to this point.
In 6 months we can come back to this thread and determine the truth value for the premise. I would guess it will be false as it has been historically so far.
> If you state “in 6 months AI will not require that much knowledge to be effective” every year and it hasn’t happened yet then every time it has been stated has been false up to this point
I think this has been true, though maybe not quite as strongly as your quote words it.
The original statement was "Maybe GP is right that at first only skilled developers can wield them to full effect, but it's obviously not going to stop there."
"full effect" is a pretty squishy term.
My more concrete claim (and similar to "Ask again in 6 months. A year.") is the following.
With every new frontier model released [0]:
1. the level of technical expertise required to achieve a given task decreases, or
2. the difficulty/complexity/size of a task that an inexperienced user can accomplish increases.
I think either of these two versions is objectively true looking back and will continue being true going forward. And, the amount that it increases by is not trivial.
[0] or every X months to account for tweaks, new tooling (Claude Code is not even a year old yet!), and new approaches.
The transition from assembly to C, as I remember it, didn't involve using automated IP theft of scraped licensed source code to generate slop that no human has understood up until it's thrown at a code reviewer, though.
Six months ago, we _literally did not have Claude Code_. We had MCP, A2A and IDE integrations, but we didn't have an app where you could say "build me an ios app that does $thing" and have it build the damn thing start to finish.
Three months ago, we didn't have Opus 4.5, which almost everyone is saying is leaps and bounds better than previous models. MCP and A2A are mostly antiquated. We also didn't have Claude Desktop, which is trying to automate work in general.
Three _weeks_ ago, we didn't have Clawdbot/Openclaw, which people are using to try and automate as much of their lives as possible...and succeeding.
Things are changing outrageously fast in this space.
I’m not worried about being a good dev or not, but these AI things thoroughly take away from the thing I enjoy doing, to the point that I’d consider leaving the industry entirely.
I don’t want to wrangle LLMs into hallucinating correct things or whatever; I don’t find that enjoyable at all.
I've been through a few cycles of using LLMs and my current usage does scratch the itch. It doesn't feel like I've lost anything. The trick is I'm still programming. I name classes and functions. I define the directory structure. I define the algorithms. By the time I'm prompting an LLM I'm describing how the code will look and it becomes a supercharged autocomplete.
When I go overboard and just tell it "now I want a form that does X", it ends up frustrating, low-quality, and takes as long to fix as if I'd just done it myself.
YMMV, but from what I've seen all the "ai made my whole app" hype isn't trustworthy and is written by people who don't actually know what problems have been introduced until it's too late. Traditional coding practices still reign supreme. We just have a free pair of extra eyes.
I also use AI to give me small examples and snippets; used that way it works okay for me.
However, this still takes something away from me: working with people who use AI to output garbage frustrates me and negatively impacts the whole craft for me.
Having bad coworkers who write sloppy code isn't a new problem, and it's always been a social problem rather than a technical one. There was probably a lot less garbage code back when it all only ran on mainframes because fewer people having access meant that only the best would get the chance, but I still think that opening that up has been a net benefit for the craft as a whole.
Before, there was at least some understanding that they wrote and understood their own garbage code.
Now that is not true. Someone can spend a few minutes generating a nonsense change and push it for review. I will have to spend a non-trivial amount of time to even tell that it’s nonsense.
This problem is already impacting projects like curl, which just recently closed its bug bounty because of low-effort AI-generated PRs.
> Before, there was at least some understanding that they wrote and understood their own garbage code.
> Now that is not true. Someone can spend a few minutes generating a nonsense change and push it for review. I will have to spend a non-trivial amount of time to even tell that it’s nonsense.
The problem sounds basically the same to me, honestly. If someone submits code that I can't understand and asks me to review it, the onus is on them to explain it. In the previous case, maybe they could, but if they can't now, the review is blocked on them figuring out how to deal with that. If that's not what's happening, it sounds more like a process or organizational problem that wouldn't be possible to fix with the presence or absence of tooling.
> This problem is already impacting projects like curl, which just recently closed its bug bounty because of low-effort AI-generated PRs
External contributions are a bit of a different problem IMO. I'd argue that open source maintainers have never had any obligation to accept or review external PRs though. Low effort PRs can be closed immediately with no explanation, and that's fine. It's also totally possible and acceptable to limit PRs to only people explicitly listed as contributors. I've even seen projects hosted on their own git infrastructure that don't allow signing up through the web UI so that you can only view everything in the browser (and of course clone the repo, which already isn't something that requires credentials for public git servers).
I guess my overall point is that the changes are more social than technical, and that this isn't the first time there has been a large social shift in how development works (and it likely won't be the last). I think viewing it through the lens of "before good, after bad" is reductive because it implies that the current changes are so large that everything beforehand was similar enough to gloss over what had already been changing over time. I'm not convinced that the differences in how programming was done socially and technically between 43 years ago (when the author says they started programming) and the dawn of LLM coding assistants were obviously smaller than the changes that AI coding tools have introduced, but that isn't reflected in the level of cynicism in most of these discussions.
I see comments like this a lot. In fact, I've run into it in my own side projects that I work on by myself -- what is this slop and how do I fix it? I only have myself to blame.
I can't speak to open source orgs like curl, but at least at the office, the company should invest time in educating engineers on how to use AI in a way that doesn't waste everyone's time. That could mean introducing domain-specific skills, rules that ensure TDD is followed, generated ADRs, work logs, etc.
I found that when I started implementing workflows like this, there was less slop, and if anyone wanted to know "why did we do it like X" we could point to the ADR and show what assumptions were made. If an assumption was fundamentally wrong, we could tell the agent to fix the assumption and fix the issue (and of course leave a paper trail).
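As a purely hypothetical sketch of enforcing that paper trail mechanically (assuming ADRs live under docs/adr/ and code under src/; both paths are made up for illustration), a CI or pre-commit guard can refuse code changes that don't carry an ADR entry:

    #!/bin/sh
    # Hypothetical guard: any change under src/ must ship with at least one
    # ADR entry under docs/adr/ in the same branch. Paths are assumptions.
    changed=$(git diff --name-only origin/main...HEAD)
    if echo "$changed" | grep -q '^src/' && \
       ! echo "$changed" | grep -q '^docs/adr/'; then
      echo "error: code change without an accompanying ADR (docs/adr/)" >&2
      exit 1
    fi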
Engineers who waste other engineers' time reviewing slop PRs should just be fired. AI is no excuse to start producing bad code. The engineer should still be responsible for the code they ship.
Serious question: so what then is the value of using an LLM? Just autocomplete? So you can use natural language? I'm seriously asking. My experience has been frustrating. Had the whole thing designed, the LLM gave me diagrams and code samples, had to tell it 3 times to go ahead and write the files, had to convince it that the files didn't exist so it would actually write them. Then when I went to run it, errors ... in the build file ... the one place there should not have been errors. And it couldn't fix those.
The value is pretty similar to autocomplete in that sometimes it's more efficient than manually typing everything out. Sometimes selecting the right completion would take longer than just typing it yourself, so you type it instead, and sometimes what you want isn't something you can autocomplete at all, so you do it manually for that reason.
Like autocomplete, it's going to work best if you already know what the end state should be and are just using it as a quicker way of getting there. If you don't already know what you're trying to complete, you might get lucky by just tabbing through to see if you find the right result, or you might spend a bunch of time only to find out that what you wanted isn't coming up for what you've typed/prompted and you're back to needing to figure out how to proceed.
There are times when there are known bugs in Debian which are purposely not fixed but instead documented and worked around. That’s part of the stability promise: the behaviour shall not change, which sometimes includes “bug as a feature”.
Sure, but in another time you'd have paid ~$2.99 for the ad-free version one-time, and carried on using it. They intentionally deleted that version of the game, screwing over everyone who did so, then quietly launched the same game again, removing the ad-free one time purchase option.
If a sandbox is optional then it is not really a good sandbox
Naturally, even flatpak on Linux suffers from this, as legacy software simply doesn’t have a concept of permission models, and this cannot be bolted on after the fact.
The containers are literally the "bolting on". You need to give the illusion that the software is running under a full OS, but you can actually mount the system directories as read-only.
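A minimal sketch of that with bubblewrap (the tool flatpak itself builds on); the writable home path here is an assumption:

    # the whole host filesystem is visible but read-only; only a scratch
    # home directory (assumed to exist at ~/sandbox-home) is writable
    bwrap --ro-bind / / \
          --dev /dev --proc /proc --tmpfs /tmp \
          --bind "$HOME/sandbox-home" "$HOME" \
          --unshare-pid \
          bash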
And you still need to mount volumes and punch all sorts of holes in the sandbox for applications to work correctly and/or be useful.
Try running GIMP inside a container, for example: you’ll have to give it access to your ~/Pictures or whatever for it to be useful.
Compare that with some photo editing applications on Android/iOS, which can work without filesystem access by getting the file through the OS file picker.
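With flatpak, that GIMP hole is a single explicit override (or, better, the app uses the file-chooser portal and never needs blanket access at all); assuming the Flathub build with the org.gimp.GIMP ID:

    # grant GIMP access to ~/Pictures only, as a user-level override
    flatpak override --user --filesystem=~/Pictures org.gimp.GIMP
    flatpak run org.gimp.GIMP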
What we need is a model similar to Google+ circles, if anyone can remember that.
Basically a thing that I could assign 1) apps and 2) content to. Apps can access all content in all circles they are assigned to. Circles can overlap arbitrarily, so you can do things like having apps A, B, C share access to documents X, Y while only A, B have access to Z, etc.
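The data model itself is just many-to-many sets; a rough sketch of the A/B/C and X/Y/Z example using ordinary groups and ACLs, assuming each app runs as its own user the way Android does (app_a, docX, etc. are hypothetical names):

    # circle 1: apps A, B and C share documents X and Y
    sudo groupadd circle_xy
    sudo usermod -aG circle_xy app_a
    sudo usermod -aG circle_xy app_b
    sudo usermod -aG circle_xy app_c
    setfacl -m g:circle_xy:r docX docY

    # circle 2: only apps A and B get document Z
    sudo groupadd circle_z
    sudo usermod -aG circle_z app_a
    sudo usermod -aG circle_z app_b
    setfacl -m g:circle_z:r docZ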
Thanks! I've been using the -I{} do-something-to-file "{}" approach, which is also handy for times when the input is one param among others. -0 is much faster.
Edit:
Looks like when doing file-by-file, -I{} is still needed:
# find tmp -type f | xargs -0 ls
ls: cannot access 'tmp/b file.md'$'\n''tmp/a file.md'$'\n''tmp/c file.md'$'\n': No such file or directory
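FWIW, the error above is on the producer side: -0 tells xargs to expect NUL-separated input, so find has to emit it with -print0. With that in place, a single batched call just works, and per-file invocation only needs -I:

    # whole batch in one ls call; spaces and newlines in names are fine
    find tmp -type f -print0 | xargs -0 ls

    # one invocation per file, if that's actually what you want
    find tmp -type f -print0 | xargs -0 -I{} ls "{}"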
not even joking