It's not quite the same category as magic spells. Kurzweil's prediction has been for 2029 for the last thirty years or so based on Moore's law type stuff.
NVIDIA chips are more versatile. During training, you might need to schedule work onto the SFU (the Special Function Unit that does sin, cos, 1/sqrt(x), etc.), run epilogues, save intermediate computations, save gradients, and so on. When you train, you might need to collect data from various GPUs, so you need to support interconnects, remote SMEM writes, etc.
Once you have trained, you have feed-forward networks consisting of frozen weights that you can just load and run data through. These weights can be duplicated across any number of devices and just sit there running inference on new data.
If this turns out to be the future use case for NNs (it is today), then Google is better set.
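To make the contrast concrete, here's a toy sketch in plain Python/NumPy (illustrative only; nothing here is CUDA- or TPU-specific). A training step has to keep intermediates around, materialize gradients, and, on a real cluster, exchange them across devices; an inference step over frozen weights is just a forward pass:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 4))            # one weight matrix, for illustration

    def forward(W, x):
        return np.tanh(x @ W)

    # Training step: forward AND backward, with intermediates kept around
    # and (on multi-GPU setups) an all-reduce of gradients over the interconnect.
    def train_step(W, x, y, lr=0.01):
        h = forward(W, x)                  # intermediate must be saved
        grad_h = 2 * (h - y) * (1 - h**2)  # backprop through tanh
        grad_W = x.T @ grad_h              # gradient must be materialized
        # all_reduce(grad_W) would go here on a multi-device cluster
        return W - lr * grad_W

    # Inference step: weights are frozen, so no gradients, no saved
    # intermediates, no interconnect traffic; just stream data through.
    def infer(W, x):
        return forward(W, x)

    x, y = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
    W = train_step(W, x, y)
    print(infer(W, x).shape)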
There are still rules on who gets priority on names: toronto.ca is the government but toronto.com is a news organization; ditto for canada.ca and canada.com; ontario.ca versus ontario.com; etc.
This is not the case for LLMs. FP16/BF16 training precision is standard, with FP8 inference very common. But labs are moving to FP8 training and even FP4.
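For a sense of what each format costs in precision, here's a tiny sketch (assumes the ml_dtypes package, which adds bfloat16 and FP8 types to NumPy):

    import numpy as np
    import ml_dtypes

    x = np.array([3.14159265], dtype=np.float32)
    for dt in (np.float16, ml_dtypes.bfloat16, ml_dtypes.float8_e4m3fn):
        # Round-trip through the narrower format and measure what was lost.
        y = x.astype(dt).astype(np.float32)
        print(f"{np.dtype(dt).name}: {y[0]:.6f} (error {abs(y[0] - x[0]):.6f})")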
Catch-up in what exactly? Google isn't building hardware to sell, they aren't in the same market.
Also, I feel you completely misunderstand the problem: it isn't how fast ONE GPU is versus ONE TPU; what matters is the cost for the same output.
Let's put it simply: you can spend $20M to get some level of output with Nvidia hardware or with your own. Your own will give you much higher output and likely lower operational costs.
It has been purpose-built for your own specific use case.
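A back-of-the-envelope sketch of that argument, with completely made-up numbers just to show the shape of the comparison (real per-chip costs and throughputs are not public):

    # All figures below are hypothetical placeholders.
    budget = 20e6                      # $20M either way

    options = {
        # name:     ($ per chip, tokens/year per chip)
        "nvidia":   (30_000, 1.0e12),
        "in-house": (15_000, 0.8e12),  # slower per chip, but no vendor margin
    }

    for name, (cost, throughput) in options.items():
        chips = budget / cost
        print(f"{name}: {chips * throughput:.2e} tokens/year for the same budget")

Even with a slower chip, cutting the per-chip cost in half buys more total output for the same budget.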
Given the 100% on things that can only happen to female bodies, I'm surprised there is no counterpart for things that can happen only to male bodies, like testicular torsion. Maybe there is no dedicated code for that?
This is extremely good advice. It’s frequently at odds with convenience, since things like iPhone screen mirroring, iPad as a 2nd display, etc. practically beg for you to sign into your work laptop with your personal Apple ID. All I can say is resist the urge to do such things.
I do keep a browser profile on the work machine signed into personal accounts, and do everything personal only in that browser, but even that is probably a mistake. Screen-sharing to a home machine would be a much better compromise; unfortunately, my gigabit internet comes with only ~50 Mbps upload, so screen sharing sucks.
As the article goes into, Cherry had their lunch eaten because they barely made any attempt to innovate for decades, and the lifeline offered by their patents ran out. They were doomed regardless of where they were based.
> This begs the question of which side agents will achieve human-level skill at first.
I don't agree; it's perfectly possible, given chasing0entropy's... let's say 'feature request', that either side might gain that skill level first.
> It wouldn’t surprise me if doing the work itself end-to-end (to a market-ready standard) remains in the uncanny valley for quite some time, while “fuzzier” roles like management can be more readily replaced.
Agreed - and for many of us, that's exactly what seems to be happening. My agent is vaguely closer to the role that a good manager has played for me in the past than it is to the role I myself have played - it keeps better TODO lists than I can, that's for sure. :-)
> It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.
Someone else stated this implicitly, but by your reasoning no complex system is ever consistent while changes are ongoing. From the perspective of any one of many concurrent writers outside the database, there is no consistency to observe. Within the database there could be pending writes in flight that haven't been persisted yet.
That’s why these consistency models are defined from the perspective of “if you did no more writes after write X, what happens”.
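Here's a minimal sketch of that definition for eventual consistency (illustrative Python, with last-writer-wins by version): replicas can disagree while writes are in flight, but once you stop writing and let every pending op land, they converge:

    import random

    replicas = [dict(), dict()]  # key -> (version, value)
    pending = []                 # undelivered (replica, key, version, value)
    clock = 0

    def write(key, value):
        global clock
        clock += 1               # total order on writes
        for i in range(len(replicas)):
            pending.append((i, key, clock, value))

    def apply(i, key, ver, value):
        if ver > replicas[i].get(key, (0, None))[0]:
            replicas[i][key] = (ver, value)  # last-writer-wins

    def deliver_some():
        random.shuffle(pending)  # messages arrive out of order
        for _ in range(len(pending) // 2):
            apply(*pending.pop())

    write("x", "a"); write("x", "b"); deliver_some()
    print("while writing:", replicas[0] == replicas[1])  # may be False

    while pending:               # stop writing, drain everything
        apply(*pending.pop())
    print("after quiescence:", replicas[0] == replicas[1])  # always True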
Happy Thanksgiving @dang, @tomhow, and the HN community! Almost 17 years here, and it's hard to overstate how much I learned from y'all.
Through every technology cycle, heated debate, and inevitable fad, the limitless curiosity of this community remains inspiring. Thank you mods and YC for staying true to the original hacker ethos.
Happy Thanksgiving everyone -- I've mostly been a lurker here over the last 20 years and I'm thankful for being able to interact with such a bright and vibrant community full of thinkers, doers and explorers -- you guys definitely changed my life for the better and inspired me in many, many ways.
A typical user still pays the same on average in a market.
They just might pay more in some hours and less in others.
Some market systems have gotten bad press over huge bills (e.g., Texas), but that only happens when only a small chunk of users participate in the market, while the rest are on fixed pricing and therefore have no reason to adjust their usage.
When everyone participates, supply and demand make sure the price never goes super high, simply because there are enough people who will turn off stuff to save money.
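A toy example with made-up prices shows both halves of that claim: the unshifted user pays the flat-rate average, and anyone who shifts load pays less, which is exactly the behavior that caps the peaks:

    flat_rate = 0.30                          # $/kWh, hypothetical
    hourly = [0.10, 0.20, 0.40, 0.50]         # averages to the same 0.30

    usage = [1.0, 1.0, 1.0, 1.0]              # kWh/hour, unshifted
    print(flat_rate * sum(usage))             # 1.20 on the flat rate
    print(sum(p * u for p, u in zip(hourly, usage)))    # 1.20 on the market

    shifted = [2.0, 1.5, 0.5, 0.0]            # same 4 kWh, moved to cheap hours
    print(sum(p * u for p, u in zip(hourly, shifted)))  # 0.70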