Cordless is nice until the batteries won't charge anymore. Or the charger stops working. Or you forgot to charge it and now want to use it. Or the charging connector gets worn and unreliable. Then you have an expensive battery replacement or other repair or (more likely) you just replace the whole device because it was made to be unrepairable, and now you have several pounds of plastic and e-waste to dispose of.
Dealing with plugging a cord into an outlet is no more burdensome than picking up the socks or shoes before the Roomba wakes up and tries to ingest them.
If the batteries don't work anymore, I buy a new vacuum. My Dyson was last updated in 2020, and it's 2025 now, so I think it's working out? The charging dock works great for making sure I never forget to put it back on charge.
I guess this is how people felt when they moved from wired phones to wireless phones?
> Dealing with plugging a cord into an outlet is no more burdensome than picking up the socks or shoes before the Roomba wakes up and tries to ingest them.
And dragging the cord around, and having to unplug and re-plug it when you want to do a different part of the room.
I'm also a Miele canister vacuum owner, and everywhere in my house where I vacuum is within range of a wall outlet. When I'm done, the cord retracts into the vacuum so I don't need to wind it or stow it myself. I guess, for me, that takes care of the issue to a great enough extent that I just never saw an advantage that justified the expense?
If you are ok with it, I think that's fine. Cordless to me is a huge productivity boost since I can just pick it up and vacuum whenever. I think most people see it as a huge win, but I haven't conducted a formal poll or anything.
Having a robot do everything is just another step in the convenience direction. It is great if you have expensive floors that you want to maintain on a daily or bi-daily basis.
In practice, yes, though they wouldn't think of it that way because that's the kind of people they surround themselves with, so it's what they think human interaction is actually like.
"I want a chat bot that's just as reliable as Steve! Sure, he doesn't get it right all the time and he cost us the Black+Decker contract, but he's so confident!"
You're right! This is exactly what an executive wants to base the future of their business off of!
You use unfalsifiable logic. And you seem to argue that, given the choice, CEOs would prefer not to maximize revenue in favor of... what, affection for an imaginary intern?
You are declaring your imagined logic as fact. Since I do not agree with the basis on which you pin your argument, there is no further point in discussion.
You're rather dramatically demonstrating how remarkable the progress has been: GPT-3 was horrible at coding. Claude Opus 4.5 is good at it.
They're already far faster than anybody on HN could ever be. Whether it takes another five years or ten, in that span of time nobody on HN will be able to keep up with the top tier models. It's not irrational, it's guaranteed. The progress has been extraordinary and obvious, the direction is certain, the outcome is certain. All that is left is to debate whether it's a couple of years or closer to a decade.
They continue to improve significantly year over year. There's no reason to think we're near a plateau in this specific regard.
The bottom 50% of software jobs in the US are worth somewhere around $200-$300 billion per year (salary + benefits + recruiting + training/education), one trillion dollars every five years minimum. That's the opportunity. It's beyond gigantic. They will keep pursuing the elimination of those jobs until it's done. It won't take long from where we're at now, it's a 3-10 year debate, rather than a 10-20 year debate. And that's just the bottom 50%, the next quarter group above that will also be eliminated over time.
$115k + $8-12k healthcare + stock + routine operating costs + training + recruitment. That's the ballpark median from two years ago. Surveys vary, but from BLS to industry estimates there are two to four million software developers, software engineers, and so on. Now eliminate most of them.
Your AI coding agent circa 2030 will work 24/7. It has a superior context to human developers. It never becomes emotional or angry or crazy. It never complains about being tired. It never quits due to working conditions. It never unionizes. It never leaves work. It never gets cancer or heart disease. It's not obese, it doesn't have diabetes. It doesn't need work perks. It doesn't need time off for vacations. It doesn't need bathrooms. It doesn't need to fit in or socialize. It has no cultural match concerns. It doesn't have children. It doesn't have a mortgage. It doesn't hate its bosses. It doesn't need to commute. It gets better over time. It only exists to work. It is the ultimate coding monkey. Goodbye human.
Amazing how much investment has gone toward eliminating one job category; ironically, the one that was supposed to be the job of the future: "learn to code". To be honest, on the current trajectory, I'm always amazed how many SWEs think it is "enabling" or that it will be anything other than this in the long term. I no longer recommend this field to anyone, especially when big money sees it as the next disruption and has bet against it in the investment markets. Amazing what was just a chatbot 3 years ago will do to a large number of people w.r.t. unemployment and potential poverty; I didn't appreciate it at the time.
Life/fate does have a sense of irony it seems. I wouldn't be surprised if it is just the "creative" industries that die; and normal jobs that provide little value today still survive in some form - they weren't judged on value delivered and still existed after all.
Doing what?
What would we need software for when we have sufficiently good AI?
AI would become "The Final Software": just give it input data, tell it what data transform you want, and it will give you the output. No need for new software ever again.
People claimed GPT-3 was great at coding when it launched. Those who said otherwise were dismissed. That has continued to be the case in every generation.
> People claimed GPT-3 was great at coding when it launched.
Ok and they were wrong, but now people are right that it is great at coding.
> That has continued to be the case in every generation.
If something gets better over time, it is definitionally true that it was bad at every point in the past until it became good. But then it is good.
That's how that works. For everything. You are talking in tautologies while not understanding the implication of your argument and how it applies to very general things, like "a thing that improves over time".
For brand new projects? Perhaps. For working with existing projects in large code bases? Still not living up to the hype. Still sick of explaining to leadership that they're not magic and "agentic" isn't magic either. Still sick of everyone not realizing that if you made coding 300% faster (which AI hasn't) that doesn't help when coding is less than half the hours of my week. Still sick of the "productivity gains" being subsidized by burning out competent code reviewers calling bullshit on things that don't work or will cause problems down the road.
Western economic history is 75% of businesses failing in the first 15 years, and the market still growing because the surviving 25% has outsized rewards.
More pertinently, we have a long history of people buying into bubbles only for them to crash hard, no matter how often people tell them "past performance is not a guarantee of future growth" or whatever the legally mandated phrase is for the supply of investment opportunities to the public where you live.
Sometimes the bubbles do useful things before they burst, like the railways. Sometimes the response to the burst creates a bunch of social safety nets, sometimes it leads to wars, sometimes both (e.g. Great Depression).
> Still the fact that LLMs can do well in things like the maths olympiad have me thinking there must be some way to tweak this to be more brain like
That's because you, as you admit in the next sentence, have almost no understanding of how they work.
Your reasoning is on the same level as someone in the 1950s thinking ubiquitous flying cars are just a few years away. Or fusion power, for that matter.
In your defense, that seems to be about the average level of engagement with this technology, even on this website.
Maybe, but flying cars and fusion ran into the fundamental barrier of the physics being hard. With human-level intelligence, though, we have evidence it's possible from our own brains, which seem to use less compute than LLMs going by power usage, so I don't see a fundamental barrier to it; it just needs some different code.
You could say there is no fundamental barrier to humans doing anything that is allowed by the laws of Physics, but that is not a very useful statement, and doesn't indicate how long it may take.
Since nobody has yet figured out how to build an artificial brain, having it as proof of possibility doesn't help much. It will be decades or more before we figure out how the brain works and are able to copy it, although no doubt people will attempt to build animal intelligence before fully knowing how nature did it.
Saying that AGI "just needs some different code" than an LLM is like saying that building an interstellar spaceship "just needs some different parts than a wheelbarrow". Both are true, and both are useless statements offering zero insight into the timeline involved.
Neither did the people expecting fusion power and flying cars to come quickly.
We have just as much evidence that fusion power is possible as we do that human level intelligence is possible. Same with small vehicle flight for that matter.