Hacker News | IhateAI's comments

I didn't get into creating software so I could read the output of plagiarism-laundering machines. Sorry, miss me with these takes. I love using my keyboard, and my brain.

So you have a hobby.

I have a profession; therefore I evaluate new tools. Agentic coding I've introduced into my auxiliary tooling (one-off bash scripts) and personal projects, and I'm just now comfortable introducing it into my professional work. But I still evaluate every line.


I love it when companies pay me money that I can in turn exchange for food, clothes, and shelter.

So then type the code as well and read it after. Why are you mad?

Like I often say, these tools are mostly useful for people to do magic tricks on themselves (and to convince C-suites that they can lower pay and reduce staff if they pay Anthropic half their engineering budget, lmao).

Any self-respecting engineer should recognize that these tools and models only serve to lower the value of your labor. They aren't there to empower you, and they aren't going to enable you to join the ruling class with some vibe-rolled slop SaaS.

Using these things will fry your brain's ability to think through hard solutions. It will give you a disease we haven't even named yet. Your brain will atrophy. Do you want your competency to be correlated 1:1 with the quality and quantity of tokens you can afford (or be loaned!!)?

Their main purpose is to convince C-suite suits that they don't need you, or that they're justified in paying you less. This will of course backfire on them, but in the meantime, why give them the training data, why give them the revenue??

I'd bet anything these new models / agentic tools are designed to optimize for token consumption. They need the revenue BADLY. These companies are valued at 200x revenue; Google IPO'd at 10-11x, lmfao. Wtf are we even doing? Can't wait to watch it crash and burn :) Soon!


People often compare working with AI agents to being something like a project manager.

I've been a project manager for years. I still work on some code myself, but most of it is done by the rest of the team.

On one hand, I have more bandwidth to think about how the overall application is serving the users, how the various pieces of the application fit together, overall consistency, etc. I think this is a useful role.

On the other hand, I definitely have felt mental atrophy from not working in the code. I still think; I still do things and write things and make decisions. But I feel mentally out of shape; I lack a certain sharpness that I perceived when I was more directly in tune with the code.

And I'm talking about something entirely orthogonal to AI. This is just me as a project manager with other humans on the project.

I think there is truth to the advice to, well, operate at a higher level! Be more systems-minded, architecture-minded, etc. I think that's true. And there are surely interesting new problems to solve if we can work not at the level of writing programs, but of wielding tools that write programs for us.

But I think there's also truth to the risk of losing something by giving up coding. Whether what might be lost is important to you or not is your own decision, but I think the risk is real.


I do think there's a real risk of brain atrophy when you rely on AI coding tools for everything, especially while learning something new. About a year ago, I dealt with this problem by using Neovim and having shortcuts like the one below to easily toggle GitHub Copilot on/off. Now that AI is baked into almost every part of the toolchain in VS Code, Cursor, Claude Code, and IntelliJ, I don't know how newer engineers will learn without AI assistance.
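Something like this, as a rough sketch (assuming the copilot.vim plugin, which provides :Copilot enable / :Copilot disable and the g:copilot_enabled flag; the <leader>tc binding is just an example):

    -- Toggle GitHub Copilot suggestions with one keystroke (copilot.vim assumed).
    vim.keymap.set("n", "<leader>tc", function()
      -- Copilot is on by default, so treat an unset flag as "enabled".
      local enabled = vim.g.copilot_enabled
      if enabled == 0 or enabled == false then
        vim.cmd("Copilot enable")
        vim.notify("Copilot enabled")
      else
        vim.cmd("Copilot disable")
        vim.notify("Copilot disabled")
      end
    end, { desc = "Toggle GitHub Copilot" })

The point is just that turning it off costs you nothing, so you actually do it when you're learning something new.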

I think in-line autocomplete is likely not that dangerous if it's used responsibly in this manner; it's the large agentic tools that are problematic for your brain, imo. But in-line autocomplete isn't going to raise billions of dollars and isn't flashy.

I'd say autocomplete introduces a certain level of fuzziness into the code we work with, though to a lesser degree. I used autocomplete for over a year, and initially it did feel like a productivity boost, yet when I later stopped using it, it never felt like my productivity decreased. I stopped because something about losing the explicit intent of my code feels uncomfortable to me.

It's very difficult to operate effectively at a higher level for an extended period of time without periodically getting back into the lower levels to try new things and learn new approaches or tools.

That doesn't even have to be writing a ton of code, but reading the code, getting intimately familiar with the metrics, querying the logs, etc.


I definitely think what you're losing is extremely important, and it can't be compensated for with LLMs once it's gone.

Back when player pianos came out, if all the world's best pianists had stopped playing and mostly just composed and written music instead, would the quality of the music have increased or decreased? I think the latter.


From an economic standpoint this is basically machines doing work humans used to do. We’ve already gone through this many times. We built machines that can make stuff orders of magnitude faster than humans, and nobody really argues we should preserve obsolete tools and techniques as a valued human craft. Obviously automation messes with jobs and identity for some people, but historically a large chunk of human labor just gets automated as the tech gets better. So I feel that arguing about whether automation is good or bad in the abstract is a bit beside the point. The more interesting question imho is how people and companies adapt to it, because it’s probably going to happen either way.

I had to create a new account, because HN is protecting their investments and basically making it impossible to post for anyone who is critical of LLMs (it said I was crawling; I'm on a dedicated proxy that definitely hasn't ever crawled HN, lol).

Automation can be good overall for society, but you also can't ignore the fact that basically all automation has decreased the value of the labor it replaced or subsidized.

This automation isn't necessarily adding value to society. I don't see any software being built that's increasing the quality of people's lives, and I don't see research being accelerated. There is no economic data to support this either. The economic gains are only reflected in the valuations of companies who are selling tokens, or who have been able to decrease their employee counts with token allowances.

All I see is people sharing CRUD apps on Twitter, 50 clones of the same SaaS, people constantly complaining about how their favorite software/OS has more bugs, the cost of hardware and electricity going up, and people literally going into psychosis. (I have a list of 70+ people on Twitter that I've been adding to who are literally manic and borderline insane because of these tools.) I can see LLMs being genuinely useful to society, like providing real-time help for the blind and disabled, but no one is doing that! It doesn't make money; automation is for the capital-owning class, not for the working class.

But hey, at least your favorite LLM shill from that podcast you loved can afford the $20,000/night resort this summer...

I'd be more okay with these mostly useless automation tools if the models were open source and didn't require $500k to run locally, but until then they basically only serve to make existing billionaires pad unnecessary zeros onto their net worth, and help prevent anyone from catching up with them.

I recommend people read this essay by Thomas Pynchon, actually read it, don't judge it by the title: https://www.nytimes.com/1984/10/28/books/is-it-ok-to-be-a-lu...


Of course it's to save businesses money (and not to empower programmers)! Software engineers spent years automating other people's jobs, but when it's SEs getting automated, suddenly progress becomes bad?

So because those people didn't defend their livelihoods we shouldn't either?

I'd say there are very few jobs that SWEs automated away outside of SOME data entry; SWEs built abstractions on top of existing processes. LLM companies want to abstract away the human entirely.


The crash and burn can't come soon enough.

When I use Google Maps, I learn faster.

And I haven't had to solve really hard problems in ages.

Some people will have problems; some will not.

The future will tell.


Honestly, my job is to ensure code quality and protect the customer. I love working with Claude Code; it makes my life easier, but in no way would a team of agents improve code quality or speed up development. I would spend far too much time reviewing and fixing laziness and bad design decisions.

When you hear execs talking about AI, it's like listening to someone talk about how they bought some magic beans that will solve all their problems. IMO the only thing we have managed to do is spend a lot more money on accelerated compute.


It would be tragically ironic if this post is AI generated.

I agree on all parts. I do not understand why anyone in the software industry would bend over backwards to show their work is worth less now.

>I'd bet anything these new models / agentic-tools are designed to optimize for token consumption.

You would think so, but Claude Code has gotten dramatically more efficient over time. They are doing so much dogfooding with these things at this point that it makes more sense to optimize.


How Butlerian of you.

Shaking fist at clouds!!

Wow, a bunch of NFT people used to say the same thing.

lmao, please explain to me why these companies should be valued at 200x revenue. They are providing autocomplete APIs.

How come Google's valuation hasn't increased 100-200x? They provide foundation models plus a ton of other services as well, and they're profitable. None of this makes sense; it's destined to fail.


I like your name; it suggests you're here for a good debate.

Let me start by conceding on the company value front; they should not have such valuations. I will also concede that these models lower the value of your labor and the quality of your craft.

But what they give in return is the ability to scale your engineering impact to new highs. Talented engineers know which implementation patterns work better and how to build debuggable, growable systems. While each file in the code may be "worse" (by whichever metric you choose), the final product has more scope and faster delivery. You can likewise choose to narrow the scope and increase quality, if that's your angle.

LLMs aren't a blanket improvement; they come with tradeoffs.


(I had to create a new account, because HN doesn't like LLM haters. Don't mess with the bag, ig.)

The em dashes in your reply scare me, but I'll assume you're a real person, lol.

I think your opinion is valid, but tell that to the C-suites who have laid off 400k tech workers in the last 16 months in the USA. These tools don't seem to be used to empower high-quality engineering, only to naively increase the bottom line by decreasing the number of engineers and increasing the workloads on those remaining.

Full disclosure, I haven't been laid off ever, but I see what's happening. I think when the trade-off is that your labor is worth a fraction of what it used to be and you're also expected to produce more, then that trade-off isn't worth it.

It would be a lot different if the signaling from business leaders were the reverse. If they believed these tools amplified labor's impact on a business, and planned on rewarding that, it would be a different story. That's not what we are seeing, and they are very open about their plans for the future of our profession.

Automation can be good overall for society, but you also can't ignore the fact that basically all automation has decreased the value of the labor it replaced or subsidized.

This automation isn't necessarily adding value to society. I don't see any software being built that's increasing the quality of people's lives, and I don't see research being accelerated. There is no economic data to support this either. The economic gains are only reflected in the valuations of companies who are selling tokens, or who have been able to decrease their employee counts with token allowances.

All I see is people sharing CRUD apps on Twitter, 50 clones of the same SaaS, people constantly complaining about how their favorite software/OS has more bugs, the cost of hardware and electricity going up, and people literally going into psychosis. (I have a list of 70+ people on Twitter that I've been adding to who are literally manic and borderline insane because of these tools.)

But hey, at least your favorite AI evangelist from that podcast you loved can afford the $20,000/night resort this summer...


Google is valued at 4T. Up from 1.2T in 2022.

it's too late to hateAI!

username checks out

> Any self respecting engineer should recognize that these tools and models only serve to lower the value of your labor.

Depends on what the aim of your labor is. Is it typing on a keyboard, memorizing (or looking up) whether that function was verb_noun() or noun_verb(), etc? Then, yeah, these tools will lower your value. If your aim is to get things done, and generate value, then no, I don't think these tools will lower your value.

This isn't all that different from CNC machining. A CNC machinist can generate a whole lot more value than someone manually jogging X/Y/Z axes on an old manual mill. If you absolutely love spinning handwheels, then it sucks to be you. CNC definitely didn't lower the value of my brother's labor -- there's no way he'd be able to manually machine enough of his product (https://www.trtvault.com/) to support himself and his family.

> Using these things will fry your brain's ability to think through hard solutions.

CNC hasn't made machinists forget about basic principles, like when to use conventional vs climb milling, speeds and feeds, or whatever. Same thing with AI. Same thing with induction cooktops. Same thing with any tool. Lazy, incompetent people will do lazy, incompetent things with whatever they are given. Yes, an idiot with a power tool is dangerous, as that tool magnifies and accelerates the messes they were already destined to make. But that doesn't make power tools intrinsically bad.

> Do you want your competency to be correlated 1:1 to the quality and quantity of tokens you can afford (or be loaned!!)?

We are already dependent on electricity. If the power goes out, we work around that as best as we can. If you can't run your power tool, but you absolutely need to make progress on whatever it is you're working on, then you pick up a hand tool. If you're using AI and it stops working for whatever reason, you simply continue without it.

I really dislike this anti-AI rhetoric. Not because I want to advocate for AI, but because it distracts from the real issue: if your work is crap, that's on you. Blaming a category of tool as inherently bad (with guaranteed bad results) suggests that there are tools that are inherently good (with guaranteed good results). No. That's absolutely incorrect. It is people who fall on the spectrum of mediocrity-to-greatness, and the tools merely help or hinder them. If someone uses AI and generates a bunch of slop, the focus should be on that person's ineptitude and/or poor judgement.

We'd all be a lot better off if we held each other to higher standards, rather than complaining about tools as a way to signal superiority.


Your brother's livelihood is not safe from AI, nor is any other livelihood. A small slice of lucky, smart, well-placed, protected individuals will benefit from AI, and I presume many unlucky people with substantial disabilities or living in poverty will benefit as well. Technology seems to continue to improve outcomes at the very top and the very bottom, while sacrificing the biggest group in the middle. Many HN software engineers immensely benefitted from Big Tech over the past 15 years -- they were part of that lucky, privileged group winning 300k+ USD salaries plus equity for a long time. AI has completely disrupted this space and drastically decreased the value of their work, and it largely did this by stealing open-source code for training data. These software engineers are right to feel upset and threatened and to oppose these AI tools, since the tools are their replacement. I believe that is why you see so much AI hate on HN.

I'm not trying to signal superiority; I'm legitimately worried about the value of my livelihood and the skills I'm passionate about. What if McDonald's went around telling chefs that they're cooking wrong, that there's no reason to cook food in a traditional manner when you can increase profit and speed with their methods?

It would be insulting; they'd get screamed out of the kitchen. Now imagine they're telling those chefs they're going to enforce those methods on them regardless of whether they like it or not.


A sign of the inevitable implosion!

It was always intended to be used that way; the programmatic advertising industry is a product of US Nat Sec.

All law enforcement and nat sec in the United States is inherently unethical, or at minimum tied to ethically questionable tactics. We have the highest incarceration rates in the world, death penalties, etc. Our military isn't exactly ethical in its missions, pretty much since WW2.

You're basically saying "There isn't anything inherently wrong with working for the 4th Reich."


This is a childishly simplistic view of the world

What complexity is it you'd like to add?

For instance, the local cops checking in on grandma, or those checking in on a troubled child, are really not the bad guys. You WANT them when you need them.

Not all LEOs are brownshirts. In my experience, few are, but they give the lot a bad rap.

Treating LEOs uniformly as evil is just counterproductive.


Yes, but I don't have a definitive map of who the good ones are, so we must treat any interaction with any of them as a life-or-death situation and suitably defend ourselves.

Why would I want cops doing that instead of social workers or teachers doing it?

No one becomes a cop because they want to be nice and help vulnerable people. Some might say they did, but that is a coping technique. Being a cop involves exerting violence on people who are vulnerable and desperate, and to become one you have to be fine with this. Some would say that this alone is enough to deem a person ethically dubious.

Even if one accepted the premise that society requires some degree of organized violence toward its members, one would also have to handle the question of accountability. Reasonably, this violence should be accountable to its victims, and police institutions inherently are not.

I think we should also note that the other person above used "childishly" to denote something negative; apparently they don't think of kids as the light of the world, or of childishness as something fun and inspiring. This makes me quite suspicious of their morals.


Maybe you and I have vastly different experiences with police. Disclaimer: I'm from a rather small US state.

Your other note is also well taken; it does not, however, imply that anything a kid or teen does is OK or automatically positive.

Finally, it's OK to be suspicious. I am too. What I am saying is that one cannot just make the decision "all cops are evil or must be treated as such" and then hope for a good outcome in all cases. I argue it's a better policy to keep an open mind and decide on a case by case basis.


No, I’m not ‘basically’ saying that. Stop putting words in my mouth.

Yeah, also, if a SaaS costs 10k a year, I promise it's not more cost-efficient to pull your 10k-a-month engineer off their usual work to build and then maintain some vibe-coded slop every time an edge case occurs.

Also, many customers of SaaS have little to zero engineering staff; they are in construction, restaurants, law offices, etc. These takes are so asinine.


Is there a place on the internet where folks like yourself, who seem able to think economically, congregate? I personally don't know of one; if I did, I wouldn't visit here anymore.

So many takes on here are so lazy and simplistic that when you go a few levels deeper, all the flaws get exposed.


It's called Polymarket.

... where they come with their own idiosyncratic mental/moral flaws

You gotta treat communities like newspapers: acknowledge their bias and diversify.


It's funny you mention this; I was going to say that the only communities I've found online where +EV thinking is the norm are professional gambling circles. However, those exist basically only in closed-off Discord and Telegram chats, where we are actively manipulating Polymarket/Kalshi markets && comment sections :)

Also interesting.

Even in companies that have SWEs, do you really want to divert in-house SWE time to something as exciting as ... accounting rules and making sure your inventory is auditable? Or any number of the weird compliance things associated with most B2B software for a medium-size business?

I refer to it as "think-for-me SaaS", and it should be avoided like the plague. Literally, it will give your brain a disease we haven't even named yet.

It's as if I woke up in a world where half the restaurants worldwide started changing their names to McDonald's and gaslighting all their customers into thinking McDonald's is better than their from-scratch menu.

Just don't use these agentic tools; they legitimately are weapons whose target is your brain. You can ship just as fast with autocomplete and decent workflows, and you know it.

It's weird; I don't understand why any self-respecting dev would support these companies. They are openly hostile about their plans for the software industry (and many other verticals).

I see it as a weapon being used by a sect of the ruling class to diminish the value of labor. While I'm not confident they'll be successful, I'm very disappointed in my peers who are cheering them on in that mission. My peers are obviously being tricked by promises of being able to join that class, but that's not what's going to happen.

You're going to lose that thinking muscle, and therefore the value of your labor is going to be directly correlated to the quantity and quality of tokens you can afford (or be given, or loaned!?).

Be wary!!!


Short term thinkers versus long term thinkers. Just look at the end goal of these companies and you'll see why you shouldn't give them anything.

To say it will free people of the boring tasks is so short-sighted...


I'm with you. It scares me how quickly some of my peers' critical thinking and architectural understanding have noticeably atrophied over the last year and a half.

I like your website design, but hate LLMs. Good luck.

This doesn't make sense; earnest money would be in escrow until the title clears. The scammer would never have access to the earnest money, nor would it ever get transferred to them unless the buyer took too long to close or didn't come up with the funds?? Like, the title company would almost have to be involved for this to work.

The title is often actually transferred, and it is a mess to clean up.

You could walk into a courthouse and submit paperwork for filing that transfers the title, all without any kind of sale or verification. It happens.


Hmm, I guess you technically just need to convince a notary that you're the seller, and with virtual closings / mobile notaries, I guess that's probably pretty easy.

But still, the scammer would never see the earnest money unless the buyer backed out outside of an option period for whatever reason. Presumably they wouldn't if the land is cheap and they've agreed to pay cash and put earnest money down.


The scammer isn't trying to get the earnest money, they are trying to get the full sale price.

Well yes, I assume that too. But the article says they'll pocket the earnest money, which makes zero sense. Probably another example of someone incapable of writing an article by themselves who used an LLM.

>"7. If they get farther they’ll pocket the earnest money deposit which would have been significant in my case."


Is there a single case of the scammer getting a single dollar from one of these scams? My suspicion is that there isn't. (Everyone who doesn't know the answer and isn't curious should downvote me.)

The global freight carrier storefronts around me all have notary services. I used them to notarize the documents from my last home sale; they glanced at my ID to the extent that they checked it matched the name on the paperwork, and signed off on it.

Yeah I wonder if this entire "scam" is a scammer's urban legend, where one scammer brags that they successfully executed it and all the rest try it a few times and eventually give up. Sort of like the search for pirate gold.
