Hacker News: alex43578's comments

Brent crude is $71 a barrel, but you can't buy gas for $1.69. Why's that?

Are you able to buy crude by the gallon and refine it yourself? That would bring the cost down a bit.

You can buy oil on the market and take delivery, yes. Figuring out the logistics (tanker trucks, rail cars, storage tanks etc) will also be your problem though.

Commodities markets aren’t like eBay, nobody is just going to FedEx you a barrel of oil.


> nobody is just going to FedEx you a barrel of oil.

I'm sure someone will if you pay enough. But you certainly won't be getting the market standard bulk price.


It was absurd to think this was valid policy in the first place. The IEEPA clearly didn’t delegate unilateral tariff authority to the president, especially on the flimsy basis of a “trade emergency”.

If Trump wanted a durable trade policy, he should have worked with the legislative majority to pass a real policy with deliberation - just like they should have done with immigration.


No, you do have to choose because money for education (or anything) isn’t unlimited.

There’s a real question of how many resources and what kind of ROI you’d get from trying to educate that bottom 20% to the same level.

I saw this play out when I was in school: profoundly intellectually disabled students getting 1:1 or even 2:1 teaching, trying to get an 18 year old to be able to read 3 letter words, while AP classes were bloated to 30+ students.


> No, you do have to choose because money for education (or anything) isn’t unlimited.

The US is the richest nation on Earth. It can easily afford to educate its people. If you really think we'd need to find new sources of tax dollars to fund that, I have a whole lot of suggestions for where to start and I'd bet that you can easily think of a few low hanging fruit yourself.

> There’s a real question of how many resources and what kind of ROI you’d get from trying to educate that bottom 20% to the same level.

The ROI is massive. As I've said elsewhere, uneducated children become uneducated adults. Adults who vote. Adults who, if they lack the education needed to live successful lives, end up costing society in many ways over far more years than they spend in school.

I don't know about you, but I want to live and work with people who are educated and literate. If I were looking to move to another country for work, I'd want to move somewhere where the people were educated and literate. Especially if those people were going to be my boss, or my neighbor, or handling my food, or in charge of my visa application. Having a well educated population is pure win. The cost of ignorance and a lack of the kinds of skills a good school teaches is staggering.


The US already spends significantly more (both in absolute terms, and as a percent of GDP) than other developed countries, but with worse outcomes (particularly for non-white, non-Asian students).

The question is whether anyone actually expects the outcomes to change if we throw even more money at the problem, or if it'll just get gobbled up by teachers' unions, administration, and silly things like non-phonics instruction or DEI programs.


We are in a record amount of debt and we are about to go to war again. That’s not including the fact that we have a shortage of teachers who are underpaid. As for “new” sources of taxation, increasing the burden on the middle class is yet another way the bottom 20% eats up 80% of the resources. Tax the rich? Unfortunately, if you tax them high enough, they will just leave. They haven’t been patriotic since the last century.

> We are in a record amount of debt and we are about to go to war again.

Isn't it funny how nobody ever worries about how much that's going to cost? No matter how unnecessary the war, there's never any effort to make sure that our warmongering is funded before burdening taxpayers with it. Seems like a ripe target for some tax savings.


People did worry about the cost pre-Biden because they were unnecessary. Unfortunately for everyone, both Putin and Xi exist. Even if you bury your head in the sand, it's not going to change their intentions or behavior. Only missiles and drones will. Your comment is over a decade out of date.

There’s also the fact that a huge portion of foreign immigrants to the US don’t and won’t learn English, but can still operate just fine (or even have the system cater to them - press 1 for Spanish).

Look at the uproar over requiring commercial drivers to be able to read road signs in English.


The US also did annex large parts of what used to be Mexico in the 19th century, so you don't even technically have to be an immigrant to speak Spanish.

Unless you're 126 years old, that excuse doesn't really hold up. Plenty of immigrants came from Italy, Poland, and Russia more recently than that, but you don't hear "Press 3 for Italian" too often.

Well... they weren't immigrants, they were annexed. Why should they speak English?

They didn't have to. But they also shouldn't expect the annexing government or populace to accommodate them.

Their country lost the war, lost the territory, and those that stayed and chose to take American citizenship should've learned English, the (de facto) language of the country they chose to join.


People still speak German in South Tyrol even though it has been part of Italy since 1919.

Along Interstate 5 in 1980s-90s Southern California, there were large signs, black-on-white, which showed a pictogram of a family running.

The English text above read "WATCH FOR PEOPLE CROSSING ROAD"

The Spanish text below read "PROHIBIDO"


So Atlas can lift 7x the capacity. Even Digit, the tote-consolidating robot, can do 35lbs.

Unitree's demos are a lot of fun, and the antics of releasing the G1 to the public have certainly captured people's attention, but a "working" robot won't look, act, or develop from the G1 or even H2.


Unitree has plenty of other industrial robots. https://www.unitree.com/ -> Click Robots

I wasn't trying to say that Unitree is somehow deficient. I'm sure they could build Atlas if they wanted.

My point was that BD could probably build a robot with the shown acrobatic capabilities, but they choose not to because their goal is to build robots that carry heavy loads for industrial applications.


They also wouldn't be getting any funding for doing such fun demos, even if they wanted to.

I don't need a robot to lift more than 15lbs in order to do all my maid work

Focussing on load capacity is missing the forest for the trees


The point is that a robot with higher load capacity is necessarily less agile. BD's target market is industrial so their robots are necessarily larger and less agile.

The fact that Unitree's robots appear so acrobatic reflects that they are likely on par with BD in terms of capabilities but have a different target market.


LLMs are already way too prepared for "Yes and..." improv, given GPT's ridiculous need to click-bait the end of every conversation.

There’s a middle road where AI replaces half the juniors or entry level roles, the interns and the bottom rung of the org chart.

In marketing, an AI can effortlessly perform basic duties, write email copy, research, etc. Same goes for programming, graphic design, translation, etc.

The results will be looked over by a senior member, but it's already clear that a role with 3 YOE or less could easily be substituted with an AI. It'll be more disruptive than spell check, clearly, even if it doesn't wipe out 50% of the labor market: even 10% would be hugely disruptive.


I think you're really overstating things here. Entry level positions are the tier from which replacement of senior positions happens. They don't do a lot, sure, but they are cheap and easily churnable. This is precisely NOT the place companies focus on for cutbacks or downsizing. AI being acceptable at replacing unskilled labor doesn't mean it WILL replace it. It has to make business sense to implement it.

If they're cheap and churnable, they're also the easiest place to see substitution.

Pre-AI, Company A hired 3 copywriters a year for their marketing team. Post-AI, they hire 1 who manages some prompting and makes some spot-tweaks, saving $80K a year and improving the turnaround time on deliverables.

My original comment isn't saying the company is going to fire the 3 copywriters on staff, but any company looking at hiring entry-level roles for tasks that AI is already very good at would be silly to not adjust their plans accordingly.


I mean you're half right. Companies seek to automate some of their transactional labor and reduce their overall head count, but they also want a pool of low paid labor to rotate when they do layoffs, which are usually focused on the highest paid slices of the labor chain.

There are a couple of issues with LLMs. The first is that by structure they make a lot of mistakes and any work they do must be verified, which sometimes takes longer than the actual work itself, and this is especially true in compliance or legal contexts. The second is the cost. If a company has a choice to outsource transactional labor to Asia for $3 an hour or spend millions on AI tokens, they will pick Asia every single time. The first constraint will never be overcome. The second has to be overcome before AI even becomes a relevant choice, and the opposite is actually happening. $/kWh is not scaling as expected.

My prediction is that LLMs will replace some entry level positions where it makes sense, but the vast majority of the labor pool will not be affected. Rather, AI might become a tool for humans to use in certain specific contexts.


Not really though:

1. Companies like savings but they’re not dumb enough to just wipe out junior roles and shoot themselves in the foot for future generations of company leaders. Business leaders have been vocal on this point and saying it’s terrible thinking.

2. In the US and Europe the work most ripe for automation and AI was long since “offshored” to places like India. If AI does have an impact it will wipe out the India tech and BPO sector before it starts to have a major impact on roles in the US and Europe.


1) Companies are dumb enough to shoot themselves in the foot over a single quarter's financials - they certainly aren't thinking about where their middle management is going to come from in 5 or 10 years.

2) There's plenty of work ripe for automation that's currently being done by recent US grads. I don't doubt offshored roles will also be affected, but there's nothing special about the average entry-level candidate from a state school that'll make them immune to the same trends.


To think companies worry about protecting the talent supply chain is to put your fingers in your ears and ignore your eyes for the past 5-10 years. We were already in a crisis of seniority where every single role was “senior only” and AI is only going to increase that.

I actually think the opposite will happen. Suddenly, smart AI-enabled juniors can easily match the productivity of traditional (or conscientious) seniors, so why hire seniors at all?

If you are an exec, you can now fire most of your expensive seniors and replace them with kids, for immediate cash savings. Yeah, the quality of your product might suffer a bit, bugs will increase, but bugs don't show up on the balance sheet and it will be next year's problem anyway, when you'll have already gone to another company after boasting huge savings for 3 quarters in a row.


> Suddenly, smart AI-enabled juniors can easily match the productivity of traditional (or conscientious) seniors, so why hire seniors at all?

I guess we'll see, but so far the flattening curve of LLM capabilities suggests otherwise. They are still very effective with simpler tasks, but they can't crack the hardest problems like a senior developer does.


1. Sure they will! It's a prisoner's dilemma. Each individual company is incentivized to minimize labor costs. Who wants to be the company who pays extra for humans in junior roles and then gets that talent poached away?

2. Yes, absolutely.


The cost of juniors has dropped enough that it's viable now.

You can get decent grads from good schools for $65k.


As far as 1 goes, how do you explain American deindustrialization and, e.g., its auto industry?

Completely different architectures and mechanisms. Machine learning draws inspiration from some biology concepts, but implements them in a different way.

Judgement-based problems are still tough - LLM as a judge might just bake those earlier model’s biases even deeper. Imagine if ChatGPT judged photos: anything yellow would win.

Agreed. Still tough, but my point was that we're starting to see that combining methods works. The models are now good enough to create rubrics for judgement stuff. Once you have rubrics you have better judgements. The models are also better at taking pages / chapters from books and "judging" based on those (think logic books, etc). The key is that capabilities become additive, and once you unlock something, you can chain that with other stuff that was tried before. That's why test time + longer context -> IMO improvements on stuff like theorem proving. You get to explore more, combine ideas and verify at the end. Something that was very hard before (i.e. very sparse rewards) becomes tractable.

Quants will push it below 256GB without completely lobotomizing it.

> without completely lobotomizing it

The question in case of quants is: will they lobotomize it beyond the point where it would be better to switch to a smaller model like GPT-OSS 120B that comes prequantized to ~60GB.


In general, quantizing down to 6 bits gives no measurable loss in performance. Down to 4 bits gives a small measurable loss in performance. It starts dropping faster at 3 bits, and at 1 bit it can fall below the performance of the next smaller model in the family (where families tend to have model sizes at factors of 4 in number of parameters).

So in the same family, you can generally quantize all the way down to 2 bits before you want to drop down to the next smaller model size.
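The arithmetic behind that rule of thumb is easy to check. Here's a minimal sketch (the model sizes are illustrative placeholders, not real products) that computes the weight memory for a hypothetical family with ~4x spacing between sizes:

```python
# Rough memory footprint (GB) of a model's weights at a given
# quantization, ignoring activation and KV-cache overhead.

def weight_gb(params_billion: float, bits: int) -> float:
    """Weight memory in GB: params * bits / 8 bytes."""
    return params_billion * 1e9 * bits / 8 / 1e9

# Hypothetical family with ~4x spacing between sizes (illustrative).
for p in (7, 30, 120):
    row = {bits: round(weight_gb(p, bits), 1) for bits in (16, 8, 6, 4, 2)}
    print(f"{p}B:", row)

# A 120B model at 2 bits (30 GB) needs the same memory as a 30B
# model at 8 bits -- which is why the crossover between "quantize
# harder" and "drop a model size" only shows up near the bottom.
```

The interesting comparison is always same-memory, not same-size: the claim upthread is that at equal memory budgets, the bigger, harder-quantized model wins down to about 2 bits.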

Between families, there will obviously be more variation. You really need to have evals specific to your use case if you want to compare them, as there can be quite different performance on different types of problems between model families, and because of optimizing for benchmarks it's really helpful to have your own to really test it out.


Did you run, say, SWE-bench Verified? Where is this claim coming from? It's just an urban legend.

> In general, quantizing down to 6 bits gives no measurable loss in performance.

...this can't be literally true or no one (including e.g. OpenAI) would use > 6 bits, right?


NVIDIA is showing training at 4 bits (NVFP4), and 4 bit quants have been standard for running LLMs at home for quite a while because performance was good enough.

I mean, GPT-OSS is delivered as a 4 bit model; and apparently they even trained it at 4 bits. Many train at 16 bits because it provides improved stability for gradient descent, but there are methods that allow even training at smaller quantizations efficiently.

There was a paper that I had been looking at, that I can't find right now, that demonstrated what I mentioned: it showed only imperceptible changes down to 6 bit quants, then performance decreasing more and more rapidly until it crossed over the next smaller model at 1 bit. But unfortunately, I can't seem to find it again.

There's this article from Unsloth, where they show MMLU scores for quantized Llama 4 models. They are of an 8 bit base model, so not quite the same as comparing to 16 bit models, but you see no reduction in score at 6 bits, while it starts falling after that. https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs/uns...

Anyhow, like anything in machine learning, if you want to be certain, you probably need to run your own evals. But when researching, I found enough evidence that down to 6 bit quants you really lose very little performance, and that even at much smaller quants the number of parameters tends to be more important than the quantization, all the way down to 2 bits, that it acts as a good rule of thumb. I'll generally grab a 6 to 8 bit quant to save on RAM without really thinking about it, and I try out models down to 2 bits if I need to in order to fit them into my system.


This isn't the paper that I was thinking of, but it shows a similar trend to the one I was looking at. In this particular case, even down to 5 bits showed no measurable reduction in performance (actually a slight increase, but that probably just means you're within the noise of what this test can distinguish), then you see performance dropping off rapidly across various 3 bit quants: https://arxiv.org/pdf/2601.14277

There was another paper that did a similar test, but with several models in a family, and all the way down to 1 bit, and it was only at 1 bit that it crossed over to having worse performance than the next smaller model. But yeah, I'm having a hard time finding that paper again.


So, why does ChatGPT not use fewer bits? Sure they have big data centers but they still have to pay for those.

Why do you think ChatGPT doesn't use a quant? GPT-OSS, which OpenAI released as open weights, uses a 4 bit quant, which is in some ways a sweet spot, it loses a small amount of performance in exchange for a very large reduction in memory usage compared to something like fp16. I think it's perfectly reasonable to expect that ChatGPT also uses the same technique, but we don't know because their SOTA models aren't open.
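The savings are easy to sanity-check with back-of-envelope math, using the ~60 GB figure quoted upthread for the prequantized GPT-OSS 120B release:

```python
# fp16 vs 4-bit weight memory for a 120B-parameter model.
PARAMS = 120e9                     # parameters
fp16_gb = PARAMS * 16 / 8 / 1e9    # 2 bytes per weight
q4_gb = PARAMS * 4 / 8 / 1e9       # 0.5 bytes per weight

print(f"fp16: {fp16_gb:.0f} GB, 4-bit: {q4_gb:.0f} GB")
# 240 GB -> 60 GB: a 4x reduction, consistent with the ~60 GB
# size of the prequantized open-weights release.
```

At inference scale that 4x translates directly into fewer GPUs per replica, which is a strong incentive for any provider to serve quantized.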

https://arxiv.org/pdf/2508.10925

