Hacker News | boshalfoshal's comments

Well, this is blatantly false; she linked the career page, and I know people who received offers recently.

They have very strong talent from Meta's FAIR/PyTorch teams, as well as a lot of strong people from OAI.


I think Tesla's valuation is a bit disconnected from fundamentals too, but calling its revenue "collapsing" is a bit dramatic.

A 20% QoQ revenue decline is almost unheard of for a company of this size.

Less than a year ago, Tesla was still officially projecting 50% YoY growth through 2030.


Keep in mind - this is not reaffirming HN's anti-AGI/extremely long timeline beliefs.

The article explicitly states that he thinks we will have an AI system that "Will be able to do your taxes" by 2028, and a system that could basically replace all white collar work by 2032.

I think an autonomous system that can reliably do your taxes with minimal to no input is already very good, and 2032 as the benchmark for replacing 90-100% of white collar work is pretty much AGI, in my opinion.

Fwiw I think the fundamental problems he describes in the article as AGI blockers are likely to be solved sooner than we think. Labs are not stupid enough to throw all their eggs and talent into the scaling basket; they are most definitely allocating resources to tackling problems like the ones described in the article, while putting the remaining resources into the bottom line (scaling current model capabilities without expensive R&D and reducing serving/training costs).


When was OTS written again? That was effectively an expert system that could do your taxes and it was around at least ten years ago. It didn't even need transformers.
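For context, the rules-based approach such a system (presumably OpenTaxSolver) takes can be sketched in a few lines: taxes are a deterministic function of the inputs, so a bracket table and a loop suffice, no model required. The bracket figures below are purely illustrative, not any real year's IRS table.

```python
# Illustrative sketch of an expert-system tax calculation: a rule table
# plus a loop, with no learned components. Bracket numbers are made up
# for the example, not taken from any real tax schedule.

BRACKETS = [  # (upper bound of bracket, marginal rate)
    (11_000, 0.10),
    (44_725, 0.12),
    (95_375, 0.22),
    (float("inf"), 0.24),
]

def tax_owed(taxable_income: float) -> float:
    """Apply each marginal rate only to the slice of income in its bracket."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable_income <= lower:
            break
        owed += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return round(owed, 2)

print(tax_owed(50_000))  # → 6307.5
```

Real tax software is this pattern scaled up across thousands of rules, which is why it was feasible long before transformers.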

No one has a good benchmark for what AGI is. LLMs are already more capable at most tasks than most random people off the street. I think at this point people keep asking about it because they're trying to ask some deeper philosophical question like "when will it be human," but don't want to say that because it sounds silly.


> Already LLMs are more capable at most tasks than most random people off the street.

I cannot imagine having the narrow conceptualization of the universe of human tasks necessary to even be able to say this with a straight face, irrespective of one's qualitative assessment of how well LLMs do the things they are capable of doing.


Why don't you go out with a laptop and offer people cash prizes for doing these tasks? See how well the average person does.

Yeah, I'll ask a carpenter to build a simple crud app.

Then I'll ask the LLM to frame a house.


The first line is just some cope people use to tell themselves they are different.

Someone using AI won't "take" your job; they'll just get more done than you, and when the company inevitably fires more people because AI can do more work autonomously, the first to go will be the people not producing as much (i.e., the people not using AI).

In the limit both groups are getting their jobs taken by AI. Knowing how to use AI is not some special skill.


Imo this is a misunderstanding of what AI companies want AI tools to be and where the industry is heading in the near future. The endgame for many companies is SWE automation, not augmentation.

To expand -

1. Models "reason" and can increasingly generate code given natural language. It's not just fancy autocomplete; it's like having an intern- to mid-level engineer at your beck and call to implement some feature. Natural language is generally sufficient when I interact with other engineers, so why is it not sufficient for an AI, which (in the limit) approaches an actual human engineer?

2. Business-wise, companies will not settle for augmentation. Software companies pay tons of money in headcount; it's probably most mid-sized companies' top or second-largest line item. The endgame for leadership at these companies is to do more with less, which necessitates automation (in addition to augmenting the remaining roles).

People need to stop thinking of LLMs as "autocomplete on steroids" and start thinking of them as a "24/7 junior SWE who doesn't need to eat or sleep and can do small tasks at 90% accuracy given some reasonable spec." Yeah, you'll need to edit their code once in a while, but they also keep getting better and cost less than an actual person.


This sounds exactly like the late '90s all over again. All the programming jobs were going to be outsourced to other countries and you'd be lucky to make minimum wage.

And then the last 25 years happened.

Now people are predicting the same thing will happen, but with AI.

The problem then, as now, is not that coding is hard; it's that people don't know what the hell they actually want.


Folks who believe this are going to lose a lot of money fixing broken software and shipping products that piss off their users.


Though not as much as they ought to, if all their competitors jump off the same cliff to fit in. :/


Good for outsourcing companies. India used Y2K to build its IT sector and lift up its economy, hopefully Africa can do the same fixing AI slop.


> Software companies pay tons of money

Software companies make a single copy and sell it a billion times. The revenue per employee is insane. The largest companies are trying to make the best product in the world and seek out slight advantages.

The cost saving mindset you are describing is found in companies where software isn’t a core part of the business.


That is the ideal for the management types and the AI labs themselves, yes. Copilots are just a way to test the market and gain data and adoption early. I don't think it is much of a secret anymore. We even see benchmarks (e.g., one OpenAI released recently) that are solely about taking paid work away from programmers, measuring how many "paid tasks" a model can actually complete. They have a benchmark; that's their target.

As a standard pleb, though, I can understand why this world, where the people with connections, social standing, and capital have an advantage, isn't appealing to many on this forum. If anyone can do something, then other advantages that aren't as easy to acquire matter relatively more.


I mean, duh? Is there anyone who denies this is what they would want to happen? That's capitalism. They'd also kill all other roles if they could; there are other very expensive personnel like salespeople, marketers, accountants, etc. Whether it's going to happen, when, and by how much is a different matter from what they want, though.


Pessimistically, you are right: there will be no new jobs. The entire goal of these companies is to monopolize near-zero-marginal-cost labor. Another way to read this is that humans are no longer necessary for economic progress.

All I hope for in this case is that governments actually take this seriously and that labs/governments/people work together to create better societal systems to handle it. Because as it stands, under capitalism I don't think anyone is going to willingly give up the wealth they made from AI and spread it to the populace as UBI. Yet that is necessary in some capitalist system (if we want to maintain one), since it's built on consumption and spending.

Though if it's truly an "abundance" scenario, then I'd imagine it probably wouldn't matter that people don't have jobs, since I'd assume everything would be dirt cheap and quality of life would be very high. Personally, though, I am very cynical when it comes to "AGI is magic pixie dust that can solve any problem" takes, and I'd assume in the short term companies will lay off people in swathes because "AI can do your job," while AI remains nowhere close to improving those laid-off people's quality of life. It'll be a tough few years if we don't actually get transformative AI.


And as it stands, AI is nowhere close to (1) and (2), but is pretty close to making all of (3) redundant.

This could be because most work is actually frivolous (very possible), but it's also easy for them to sell those tasks, since ostensibly (1) and (2) actually require a lot of out-of-distribution reasoning, thinking, and real agentic research (which current models probably aren't capable of).

(3) just makes the most money now with the current technology. Curing cancer with LLMs, though altruistic, is more unrealistic and has no clear path to immediate profitability because of that.

These "AGI" companies aren't doing this out of the goodness of their hearts with humanity in mind; it's pretty clearly meant to be a "last company standing" type of race, where everyone at the {winning AI Company} is super rich and powerful in whatever new world paradigm shows up afterwards.


You are thinking about "hard" and "easy" in the wrong frame of mind. What Tesla does is not "easy" either. Their moat is manufacturing, the R&D they've spent co-designing their HW and SW stacks, and their insane supply chain.

Ford does not suddenly have several million cars with 8-9 cameras to tap into for training data, nor does it have the infrastructure/talent to train models with the data it may get. I think you are underselling the Tesla moat.

It's the same reason there are only 3-4 "frontier" AI labs while the rest are playing catch-up, despite a lot of LLM improvements being pretty well researched and open in papers.


My take is less tinfoil-hatty than this.

I simply think that the majority of people in AI today are sci-fi nerds who want to live out these fantasies and want to be part of something much larger than they are.

There's also the obvious incentive for AI companies: automating everything is extremely lucrative (i.e., they stand to gain lots of money/power from the hype, and in the event that AGI is real).


Imo it probably (unfortunately) doesn't matter if many people are unemployed from an economic perspective.

Human demand for things is basically unlimited, and capital will ostensibly be concentrated within the upper class who reaped all the benefits of AI automation. Monetary velocity within that upper stratum of society will probably be only slightly lower than what we have now overall; the rich would just be paying for more things, and more expensive things. If I had to guess, there would still be plenty of economic activity, just among the people who actually have money to transact.

