> Any self respecting engineer should recognize that these tools and models only serve to lower the value of your labor.
Depends on what the aim of your labor is. Is it typing on a keyboard, memorizing (or looking up) whether that function was verb_noun() or noun_verb(), etc? Then, yeah, these tools will lower your value. If your aim is to get things done, and generate value, then no, I don't think these tools will lower your value.
This isn't all that different from CNC machining. A CNC machinist can generate a whole lot more value than someone manually jogging X/Y/Z axes on an old manual mill. If you absolutely love spinning handwheels, then it sucks to be you. CNC definitely didn't lower the value of my brother's labor -- there's no way he'd be able to manually machine enough of his product (https://www.trtvault.com/) to support himself and his family.
> Using these things will fry your brain's ability to think through hard solutions.
CNC hasn't made machinists forget about basic principles, like when to use conventional vs climb milling, speeds and feeds, or whatever. Same thing with AI. Same thing with induction cooktops. Same thing with any tool. Lazy, incompetent people will do lazy, incompetent things with whatever they are given. Yes, an idiot with a power tool is dangerous, as that tool magnifies and accelerates the messes they were already destined to make. But that doesn't make power tools intrinsically bad.
> Do you want your competency to be correlated 1:1 to the quality and quantity of tokens you can afford (or be loaned!!)?
We are already dependent on electricity. If the power goes out, we work around that as best as we can. If you can't run your power tool, but you absolutely need to make progress on whatever it is you're working on, then you pick up a hand tool. If you're using AI and it stops working for whatever reason, you simply continue without it.
I really dislike this anti-AI rhetoric. Not because I want to advocate for AI, but because it distracts from the real issue: if your work is crap, that's on you. Blaming a category of tool as inherently bad (with guaranteed bad results) suggests that there are tools that are inherently good (with guaranteed good results). No. That's absolutely incorrect. It is people who fall on the spectrum of mediocrity-to-greatness, and the tools merely help or hinder them. If someone uses AI and generates a bunch of slop, the focus should be on that person's ineptitude and/or poor judgement.
We'd all be a lot better off if we held each other to higher standards, rather than complaining about tools as a way to signal superiority.
Your brother's livelihood is not safe from AI, nor is any other livelihood. A small slice of lucky, smart, well-placed, protected individuals will benefit from AI, and I presume many unlucky people with substantial disabilities or living in poverty will benefit as well. Technology seems to keep improving outcomes at the very top and the very bottom, while sacrificing the biggest group in the middle.
Many HN software engineers immensely benefited from Big Tech over the past 15 years -- they were part of that lucky, privileged group winning 300k+ USD salaries plus equity for a long time. AI has completely disrupted this space and drastically decreased the value of their work, and it largely did so by stealing open source code for training data. These software engineers are right to feel upset and threatened and to oppose these AI tools, since the tools are their replacement. I believe that is why you see so much AI hate on HN.
I'm not trying to signal superiority; I'm legitimately worried about the value of my livelihood and the skills I'm passionate about. What if McDonald's went around telling chefs that they're cooking wrong, that there's no reason to cook food in a traditional manner when you can increase profit and speed with McDonald's methods?
It would be insulting; they'd get screamed out of the kitchen. Now imagine they're telling those chefs they're going to enforce those methods on them, whether they like it or not.
Vertical CNC mills and CNC lathes are, obviously, different machines with different use cases. But if you compare within the categories, the designs are almost all conceptually the same.
So, what about outside of some set of categories? Well, generally, no such thing exists: new ideas are extremely rare.
Anyone who truly enjoys entering code character by character, refusing to use refactoring tools (e.g. rename symbol), and/or not using AI assistance should feel free to do so.
I, on the other hand, want to concern myself with the end product, which is a matter of knowing what to build and how to build it. There’s nothing about AI assistance that entails that one isn’t in the driver’s seat wrt algorithm design/choices, database schema design, using SIMD where possible, understanding and implementing protocols (whether HTTP or CMSIS-DAP for debugging microcontrollers over USB JTAG probe), etc, etc.
AI helps me write exactly what I would write without it, but in a fraction of the time. Of course, when the rare novel thing comes up, I either need to coach the LLM, or step in and write that part myself.
But, as a Staff Engineer, this is no different than what I already do with my human peers: I describe what needs doing and how it should be done, delegate that work to N other less senior people, provide coaching when something doesn’t meet my expectations, and I personally solve the problems that no one else has a chance of beginning to solve if they spent the next year or two solely focused on it.
Could I solve any one of those individual, delegated tasks faster if I did it myself? Absolutely. But could I achieve the same progress, in aggregate, as a legion of less experienced developers working in parallel? No.
LLM usage is like having an army of Juniors. If the result is crap, that’s on the user for their poor management and/or lack of good judgement in assessing the results, much like how it is my failing if a project I lead as a Staff Engineer is a flop.
Motions can be inclusive or exclusive. It works like the two ways of denoting ranges: [0,1] (endpoints included) and (0,1) (endpoints excluded).
Consider the command `d` (delete) combined with the motions for `"`.
First we have `da"`, it deletes the everything between the pair of `"` characters that surround my cursor. Next, `di"` deletes the contents of the `"` pair.
The movement `a"` is inclusive (think 'a quote') and `i"` is exclusive (think 'inside quote'). Combined with the command you get "delete a quote" and "delete inside quote" when the mnemonics are spelled out.
Oh, wow, great info, thanks. I knew the general concept from high school math (where it's called open and closed intervals) and also from Python ranges, but didn't know about it in connection with Vim. Got it now.
Do you take issue with companies stating that they (the company) built something, instead of stating that their employees built something? Should the architects and senior developers disclaim any credit, because the majority of tickets were completed by junior and mid-level developers?
Do you take issue with a CNC machinist stating that they made something, rather than stating that they did the CAD and CAM work but that it was the CNC machine that made the part?
Non-zero delegation doesn’t mean that the person(s) doing the delegating have put zero effort into making something, so I don’t think that delegation makes it dishonest to say that you made something. But perhaps you disagree. Or, maybe you think the use of AI means that the person using AI isn’t putting any constructive effort into what was made — but then I’d say that you’re likely way overestimating the ability of LLMs.
Could we please avoid the strawmen? Nowhere have I claimed that they didn't put work into this. Nowhere did I say that delegation is bad. I'd like to encourage a discussion, but then please counter the opinion that I gave, not a made-up one that I neither stated nor actually hold.
We all agree that crafting the right prompts (or whatever we call the CLAUDE.md instructions) is a lot of work, don't we? Of course they put work into this; it's a file of substantial size. And then Claude used it to build the thing. Where is the contradiction? I don't see the mental gymnastics, sorry.
Let me rephrase GP into (I hope) a more useful analogy.
— actually, here’s the whole analogous exchange:
“A rectangle is an equal-sided rectangle (i.e. “square”) though. That’s what the R stands for.”
“No? Why would you think a rectangle is a square?”
Just as not all rectangles are squares (squares are a specific subset of rectangles), not all datagram protocols are UDP (UDP is just one particular datagram protocol).
The obvious answer is "I didn't know datagrams were a superset of UDP". I don't really understand how "how do you not know this" is a reasonable or useful question to ask.
You read it that way because that’s the sensible way to read it. Everyone suggesting you missed the plot is in turn making a rather large logical leap.
What whoknowsidont is trying to say (IIUC): the models aren't trained on any particular MCP server's usage. Yes, the models "know" what MCP is. But the point is that they don't necessarily have the details of a given MCP server baked in -- if they did, there would be no point in MCP servers serving prompts / tool descriptions.
Well, arguably the descriptions could be beneficial for interfaces that let you interactively test MCP tools, but that's certainly not the main reason. The main reason is that the model needs to be informed about what the MCP server provides and how to use it (where "how to use it" in this context means "what is the schema and intent behind the specific inputs/outputs" -- tool calling itself is baked into the training, and the OpenAI docs give a good example: https://platform.openai.com/docs/guides/function-calling).
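For concreteness, here's roughly the shape of such a tool description in the OpenAI chat-completions style (the `get_weather` tool is an illustrative placeholder, not from any real MCP server; see the function-calling docs linked above):

```python
# Roughly what the model is shown at runtime: a name, a natural-language
# description, and a JSON Schema for the inputs. The schema/intent is the
# part that must be served, because it isn't baked into the weights.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current temperature for a location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and country, e.g. 'Bogota, Colombia'",
                },
            },
            "required": ["location"],
        },
    },
}]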
I suppose someone could try to abuse MCP by stuffing information about REST API endpoints into the prompts/descriptions of a small MCP "skeleton" service, but I don't know of any such service. Can you provide examples?
> they just teach them how to use the tools and then the LLM calls the APIs directly as any HTTP client would.
I suspect you might have some deep misunderstandings about MCP.
When an MCP tool is used, all of its output is piped straight into the LLM's context. If another MCP tool is needed to aggregate/filter/transform/etc. the previous output, the LLM has to try ("try" being the operative word -- LLMs are nondeterministic by nature) to reproduce the needed bits as inputs to the next tool use. This increases latency dramatically and is an inefficient use of tokens.
This "a1" project, if I'm reading it correctly, allows for pipelining multiple consecutive tool uses without the LLM/agent being in the loop, until the very end when the final results are handed off to the LLM.