nasreddin's comments

"a gameboy emulator should be a weekend project for anyone even ever so slightly qualified" do you really believe something so ridiculous?

The free market has simply decided these consumers are not as relevant as the others.

Maybe the free market is wrong.

It can’t be. Those uses are suboptimal, hence the users aren’t willing to pay the new prices.

Not really. Investors with hundreds of billions of dollars have decided it. The process by which capital has been allocated the way it has isn't some mathematically natural or optimal thing. Our market is far from free.

Saying "investors with hundreds of billions decided it" makes it sound like a few people just chose the outcome, when in reality prices and capital move because millions of consumers, companies, workers, and smaller investors keep making choices every day. Big investors only make money if their decisions match what people actually want; they can't just command success. If they guess wrong, others profit by allocating money better, so having influence isn't the same as having control.

The system isn't mathematically perfect, but that doesn't make it arbitrary. It works through an evolutionary process: bad bets lose money, better ones gain more resources.

Any claim that the outcome is suboptimal only really means something if the claimant can point to a specific alternative that would reliably do better under the same conditions. Otherwise critics are mostly just expressing personal frustration with the outcome.
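The "evolutionary process" claim above can be illustrated with a toy simulation (all names and numbers here are hypothetical, chosen only to show the mechanism, not to model real markets): investors whose bets pay off compound their capital, so over many rounds capital concentrates with the better allocators even when every individual round is noisy.

```python
import random

def simulate_allocation(skills, capital, rounds=200, seed=0):
    """Toy model: each round, every investor's capital grows by a noisy
    return centred on their skill; better allocators compound faster."""
    rng = random.Random(seed)
    capital = list(capital)
    for _ in range(rounds):
        for i, skill in enumerate(skills):
            ret = skill + rng.gauss(0, 0.05)  # noisy per-round return
            capital[i] *= max(0.0, 1.0 + ret)
    total = sum(capital)
    return [c / total for c in capital]  # each investor's share of capital

# Two investors start equal; one has a slightly better expected return.
shares = simulate_allocation(skills=[0.01, 0.03], capital=[100.0, 100.0])
# Over many rounds, the higher-skill allocator typically ends up
# controlling most of the capital, despite the per-round noise.
```

The point of the sketch is that no one "decides" the final allocation; it emerges from repeated selection on outcomes.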


It's an incorrect assumption: the inference speed, particularly of the on-device LLMs that AVs would need to use, is not compatible with the real-time requirements of driving.

I think the assumption is valid. Most of the reasoning components of the next gen (and some current gen) robotics will use VLMs to some extent. Deciding if a temporary construction sign is valid seems to fall under this use case.

But unless you are using a single, end-to-end model for the entire driving stack, that "proceed" command will never influence the accelerator pedal.

Sure, there will be a VLM for reading the signs, but the worst it'd be able to output is something like "there is a 'detour' sign at (123, 456) pointing to road #987" - and some other, likely non-LLM, mechanism will ensure that following that road is actually safe.


Not a "proceed" command but they can influence the accelerator. I had a dodge ram van that would constantly decelerate on cruise control due to reading road signs. The signs in some states like California for trucks towing trailers are 55 mph but the speed limit would be 65 or 70 mph. The cruise control would detect the sign and suddenly decelerate to 55.

That's an example of things working as expected - the sign recognition system is very limited, in that it can only return road sign information. So it can _ask_ the cruise control system to change the speed, but it's up to cruise control to decide whether it's safe to obey the request. For example, I am pretty sure it'll never raise the speed, no matter what the sign recognition system says.
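A minimal sketch of the arbitration being described, assuming a perception module that can only emit speed requests and an arbiter that owns the final decision (`SignReading` and `arbitrate_cruise_speed` are hypothetical names, not any vendor's actual API):

```python
from dataclasses import dataclass

@dataclass
class SignReading:
    """Hypothetical output of a sign-recognition module: a request, not a command."""
    requested_speed_mph: int
    confidence: float

def arbitrate_cruise_speed(set_speed_mph: int, reading: SignReading,
                           min_confidence: float = 0.8) -> int:
    """Cruise control decides whether to obey the recognizer's request.

    The recognizer can only *ask*; the arbiter never raises the speed
    above the driver's set point, no matter what the sign module reports.
    """
    if reading.confidence < min_confidence:
        return set_speed_mph  # ignore low-confidence readings
    # Obey only requests that slow the vehicle down.
    return min(set_speed_mph, reading.requested_speed_mph)

# The Dodge Ram scenario: cruise set to 65, truck-trailer sign reads 55.
print(arbitrate_cruise_speed(65, SignReading(55, 0.95)))  # 55
# A misread "85" sign can never raise the set speed.
print(arbitrate_cruise_speed(65, SignReading(85, 0.95)))  # 65
```

The design choice is the one the comment points at: the perception layer's output vocabulary is deliberately narrow, and authority over the pedal stays with a simpler, auditable component.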

"Transportation, like software, is accumulated knowledge. The horse embodied centuries of breeding, training, and hard-won understanding about terrain, endurance, and failure. People learned from the horses they rode. Travel improved through incremental refinement, generation after generation. The automobile didn’t appear in a vacuum.

Building all your transportation yourself—whether by breeding horses or assembling a Model T—cuts you off from that accumulated experience. You lose the benefits of thousands of hours spent by others thinking carefully about the same problems.

I have no objection to Model Ts for personal use, but I wouldn’t be one-tenth the traveler I am without constant exposure to well-bred horses.

Some worry cars make horses obsolete—who needs breeders if anyone can buy an engine? I’m more optimistic. As cars proliferate, people will value good horses more. A Model T gets you the first 90%; it’s the last 90%—judgment, robustness, and adaptability—that differentiates."


This is essentially the idea of a context window in modern LLMs: every task carries implicit domain knowledge, and no matter how capable the model is, if that knowledge isn't in the context, the resulting software won't be functional.


I think this is the cause for the division in the perception of how useful AI is.

If you work in a field with mostly proprietary implementations of solutions, the top models aren't going to be all that helpful. The models won't have the domain knowledge, because open source code doesn't exist for most domains - there's very real competitive advantage in keeping code and processes that aren't trivially implemented a secret!

I think proprietary data is the new moat, because that's where the vast majority of useful domain knowledge exists.
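The context-window constraint above can be made concrete with a small sketch (assumed, simplified setup: a hypothetical `build_prompt` helper with a character budget standing in for a real token budget): whatever proprietary documentation doesn't fit in the window is simply invisible to the model, regardless of how capable the model is.

```python
def build_prompt(task: str, domain_docs: list[str], max_chars: int = 4000) -> str:
    """Pack proprietary domain knowledge into a fixed-size context window.

    Docs are added in order until the budget is exhausted; anything that
    doesn't fit never reaches the model at all.
    """
    context, used = [], 0
    for doc in domain_docs:
        if used + len(doc) > max_chars:
            break  # budget exhausted: remaining knowledge is lost to the model
        context.append(doc)
        used += len(doc)
    return "\n\n".join(
        ["Use only the following internal documentation:"]
        + context
        + [f"Task: {task}"]
    )

# A short internal doc fits; a huge one silently falls outside the window.
prompt = build_prompt("implement the billing job",
                      ["billing API notes " * 5, "full legacy spec " * 1000],
                      max_chars=200)
```

In practice this is why retrieval and context engineering over proprietary data matter so much: the moat is deciding which of those documents make it into the window.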


TSMC is a for-profit business. Why would they care about the moral purity of the applications running on their chips? Seriously illogical statement.


Oh, they definitely don't. I'm just pointing out that Apple can afford to forfeit the latest nodes without sacrificing anything important, whereas Nvidia cannot.


Apple can't sleep on chip advances either.

Intel seems to be very competitive again when it comes to laptop battery life. If MacBooks again get a reputation for being sluggish and overheating, that's not great for sales.


...using TSMC fabs...

if 18A gets going and is good, I am sure Apple will purchase capacity


For laptops, maybe. For iPhone and iPad, more power is the last thing Apple should be focusing on.

