torginus's comments

I think having massive amounts of high-bandwidth memory on consumer-grade hardware could become a reality via flash.

The way flash in SSDs works is that you have tens to hundreds of dies stacked on top of each other in the same package, with their outputs multiplexed so that only one of them can talk at a time.

We do it like this because we can still get 1-2 GB/s out of a chip this way, and the ability to read hundreds of times faster isn't justified for storage use.

But if we connected these dies to high-speed transceivers, we could get hundreds of GB/s of bandwidth out of them at the same time.
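
As a rough back-of-the-envelope sketch (the per-die bandwidth, die count, and package count below are illustrative assumptions, not vendor specs):

  // Hypothetical figures, only to show the order of magnitude involved.
  using System;

  class FlashBandwidthSketch
  {
      static void Main()
      {
          double perDieGBps = 1.5;  // ~1-2 GB/s per die, as mentioned above
          int diesPerPackage = 64;  // "tens to hundreds" of stacked dies
          int packages = 8;         // an SSD typically has several packages
          double aggregateGBps = perDieGBps * diesPerPackage * packages;
          Console.WriteLine($"If every die could talk at once: ~{aggregateGBps} GB/s");
      }
  }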

I'm probably oversimplifying, and it's not that simple IRL, but I'm sure people are already working on this (I didn't come up with the idea), and it might end up working out and turning into a commercial product.


What I didn't get is that, AFAIR, Boom doesn't build its own engines. Aren't they using some old 50s-60s fighter jet engines?

And imagine: all this poorly located, overpriced, haphazardly thrown together, polluting infrastructure will basically get flushed down the toilet once either the AI bubble pops or someone figures out a new way of doing AI that doesn't require terawatts of power.

I can see why he would make that argument. When you don't have any process isolation, a software fault means your entire stack is untrustworthy. The network driver or filesystem driver might be corrupted, so nothing you write to disk or send over the network can be trusted.

You also have to recreate your entire userspace and tooling to work in this environment, and testing, running, or debugging your software becomes a headache.


The low-level API for process isolation on Windows is Job Objects, which provide the necessary kernel primitives for namespacing objects and controlling resource use.

AppContainers and Docker for Windows (the one for running dockerized Windows apps, not for running Linux Docker containers on top of WSL) use this API; those high-level features are just the 'porcelain'.
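
For the curious, a minimal sketch of the underlying kernel32 calls via P/Invoke from C# (error handling and the SetInformationJobObject limit structs are omitted; treat it as an illustration, not production code):

  using System;
  using System.Diagnostics;
  using System.Runtime.InteropServices;

  class JobObjectSketch
  {
      // Real resource limits would be applied with SetInformationJobObject
      // and a JOBOBJECT_EXTENDED_LIMIT_INFORMATION struct, omitted here.
      [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
      static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

      [DllImport("kernel32.dll", SetLastError = true)]
      static extern bool AssignProcessToJobObject(IntPtr hJob, IntPtr hProcess);

      static void Main()
      {
          IntPtr job = CreateJobObject(IntPtr.Zero, null); // unnamed job object
          Process child = Process.Start("notepad.exe");    // any child process
          // Once assigned, the process is governed by the job's limits and
          // can be terminated together with the rest of the job.
          AssignProcessToJobObject(job, child.Handle);
      }
  }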


So many fallacies here: imprecise, reaching arguments, attempts at creating moral panic, and the insistence that most people create poor-quality garbage code, in stark contrast to the poster. How his bespoke excellence differs from the dreck produced by the soulless masses is gracefully omitted.

First, the core argument that 'industrialization' produces low-quality slop is not true - industrialization is about precisely controlled, repeatable processes. A table cut by a CNC router is likely more dimensionally accurate than one cut by hand; in fact, many industrial processes and machines have trickled back into the toolboxes of master craftsmen, where they increased productivity and quality.

Second, from my experience working at large enterprises and on smaller teams, the 80-20 rule definitely holds - there's always a core team of a handful of people who lay down the foundations and design and architect most of the code, with the rest usually fixing bugs or building bullet-point features.

I'm not saying the people who fall into the 80% don't contribute or are somehow lesser devs; they're mostly just not well positioned in the org to make major contributions. Another invariable aspect is that as features, complexity, and legacy code grow, the effort needed to make a change or to understand and fix a bug grows superlinearly, meaning the 'last 10%' often takes as much or more effort than everything that came before.

This is hardly an original observation, and in today's environment of never-ending iteration, what counts as the last 10% is hard to define, but most modern software development is highly incremental and often focused on building unneeded features or sidegrade redesigns.


Unity somehow manages to break the APIs of their own features so badly every year or so that their own tutorials stop working. There is a solid baseline API that has existed since forever (with known limitations), with stuff like the legacy render pipeline; every attempt to reform it has only introduced confusion and complexity, and each sits somewhere between experimental and no longer supported.

I don't agree with you on the Asset Store, for the same reasons - the rate of breakage means that things that are not constantly updated no longer work, and multiple versions need to be maintained for parallel engine versions. Combine that with the dubious economics of the Asset Store (I don't think it makes financial sense to even make these things, let alone maintain them), and they mostly end up as abandonware.

And on the Asset Store, if you make something indispensable (which is more often than not something the engine should have OOTB, like competent text rendering), one of the following things will happen:

- Unity buys you out and plops your asset into the engine without doing any integration work, so it sticks out like a sore thumb (TextMeshPro). Good for you, bad for consumers, and it sucks if you were making a competitor.

- They build an in-house solution that you obviously can't compete with, since they have a huge leg up on you thanks to engine access (sucks to be you).

- The engine never gets that feature, because 'you can just buy it', meaning you either spend hundreds of dollars/euros on dubious-quality assets or hunt for open-source versions with generally even more variable usability. UE4/5 has a lot of these built in, at AAA quality.


I think you've unfortunately been suckered in by Unity marketing wholesale, and a few things stand to be cleared up.

Unity's whole shtick is that they make something horrible, then improve upon it marginally. The ground reality is that these performance-enhancement schemes still fall well short of just doing the basic, sensible thing: using CoreCLR for most code and writing C++ for the truly perf-critical parts.

IL2CPP is a horror-kludge that generates low-quality C++ code from .NET IL, relying on the optimizing C++ compiler to extract decent performance out of it.

You can check it out: https://unity.com/blog/engine-platform/il2cpp-internals-a-to...

The resulting code gives up every possible convenience of C# (compile speed, ease of iteration, debuggability) while falling well short of even modern .NET on performance.

The Burst compiler/HPC# plays on every meme perpetuated by modern gamedev culture (structure-of-arrays, ECS), but performance-wise it generally still falls short of competently but naively written C++, or even sometimes .NET C#. (Though to be fair, most naive CoreCLR C# code is around 70-80% of the speed of hyper-optimized Burst.)

These technologies are, needless to say, entirely proprietary, and they require you to architect your code entirely around their paradigms and to use proprietary, non-free libraries that make it unusable outside Unity, among other nasty side effects.

This whole snake-oil salesmanship is enabled by cooked Unity benchmarks that always compare performance to the (very slow) baseline Mono, not to modern C# or C++ compilers.

These are well-established facts, benchmarked time and time again, but Unity marketing still manages to spread the narrative that their special-sauce compilers are technically superior.

But it seems the truth has been catching up with them, and even they have realized they need to embrace CoreCLR - which is coming Soon(TM) to Unity. I think it's going to be a fun conversation when people realize that their regular Unity code running on CoreCLR is just as fast as, or faster than, the kludgey stuff they spent three times as long writing - the stuff Unity has been pushing for more than a decade as the future of the engine.


The biggest issue is that Unity is, at the same time, the beacon for doing game development in C#, which Microsoft refuses to support - see how much effort Apple puts into game kits for Swift, versus the DirectX team.

Efforts like Managed DirectX and XNA were driven by highly motivated individuals, and were quickly killed as soon as those individuals changed roles.

One could blame them for leaving those projects, or recognize that without them, management did not care enough to keep the projects going.

At the same time, since Unity relies on such alternative approaches, it also creates a false perception of how good .NET and C# actually are, for those devs who never learned C# outside Unity.

It is similar to those devs who learned Java on Android and got sold on Google's Kotlin-vs-Java marketing, taking Android Java as their perception of what the language is all about.

Going back to game development and .NET, at least Capcom has the resources to have their own fork of modern .NET; e.g. Devil May Cry for the PlayStation was done with it.

"RE:2023 C# 8.0 / .NET Support for Game Code, and the Future"

https://www.youtube.com/watch?v=tDUY90yIC7U


Very interesting talk, will definitely watch when I have the time!

XNA was very influential for me as well - when I was in high school, I tried to get into 3D game dev, and I started with Frank D. Luna's otherwise excellent book on DirectX gamedev - man, that thing was a tome. However, having to learn DirectX, C++, linear algebra, shaders, the WIN32 API, COM, etc. at the same time (which, to be fair, were explained very thoroughly by the book) was just too much for me back then, not to mention the absolute pain of trying to get models and assets into the game.

Later on I discovered XNA, and it was a breath of fresh air for me - a much easier language, good IDE support, a decent way of importing assets, and a much nicer API made it so much easier to get started.

And the truly great thing about it was that it didn't dumb things down or hide stuff from the developer - it merely provided sane defaults, and utility functions so that you didn't have to engage with all that complexity at once.

I think Unity was also great, at least in the beginning (the first decade of its existence), but its chief issue is that Unity's 'dialect' of C# was very different from how you programmed in regular C# (or most any other engine) - my feeling is that Unity should've spun their own language/runtime rather than trying to make C# into what it wasn't designed to be.


> Unity should've spun their own language/runtime

They did, and that's why their C# API is such an oddball. Unity used to support 3 .NET languages: UnityScript, Boo, and C#. UnityScript started as the recommended one, but I believe it was just a JS-like syntax for Boo's semantics. Eventually C# users dominated, and UnityScript and Boo got deprecated and removed, but Unity's .NET API was left with all the quirks from their UnityScript era.


They did, hence Boo

https://en.wikipedia.org/wiki/Boo_(programming_language)

I would argue that C# has always been a good alternative for games, starting with Arena Wars; the problem was Microsoft not being serious about AOT or low-level programming, because that was left to C++/CLI.

https://en.wikipedia.org/wiki/Arena_Wars

Here is the person responsible for pushing XNA, even though management wasn't into it.

"The billion dollar decision that launched XNA"

https://youtu.be/wJY8RhPHmUQ?si=_3pic4pEiOlqQzvm

When she left Microsoft, XNA was promptly replaced by DirectXTK, because C++ is the only true way for the DirectX team:

https://walbourn.github.io/directxtk/


Capcom doing their own environment still sounds a bit extreme to me in its own right (with the shitpost comment that I bet they have a slick dispatcher involved somewhere vs. 'trust the threadpool').

But then I remember they have to deal with IL2CPP, because lots of mobile/console platforms do not allow JIT as policy.

.NET does now have 'full AOT' as a thing, at least, and I believe Streets of Rage 4 used CoreRT for at least one of the platforms it was released on.

What's more hopeful about that is that you can see backers of many different architectures contributing to the ecosystem.


This part of your comment is wrong on many levels: "The Burst compiler/HPC# plays on every meme perpetuated by modern gamedev culture (structure-of-arrays, ECS), but performance-wise it generally still falls short of competently but naively written C++, or even sometimes .NET C#. (Though to be fair, most naive CoreCLR C# code is around 70-80% of the speed of hyper-optimized Burst.)".

C++ code is much faster than C#, but modern C# has become a lot better with all the time that's been invested into it. You can't just take a random bit of C code and think that it's going to be better than an optimized bit of C#; those days are long past.

Secondly, the whole point of Burst is that it enables vectorization, which means that if you've converted code to it and it's used properly, it's going to use instructions up to 256 bits wide (from what I remember it doesn't use AVX-512). That means it's going to be significantly faster than standard C# (and C).

If the author is generating, for example, maps, and it takes 80 seconds with Mono, then getting to 10-30 seconds with Burst is easy to achieve just due to its thread usage. Once you add in focused optimizations that make use of vectorization, you can get that down to probably 4-odd seconds (the actual numbers really depend on what you're doing: if it's a numerical calculation you can easily get an 80x improvement, but if there's a lot of logic being applied you'll be stuck at e.g. 8x).

For the last point, modern C# can't just magically apply vectorization everywhere, because developers intersperse far too much logic. It has a lot of libraries etc. that have become much more performant, but again, you can't compare that directly to Burst. To compare with Burst you have to do a comparison using System.Numerics, etc.
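
To make that comparison concrete, here's a minimal sketch (not a benchmark) of the kind of System.Numerics code a fair Burst comparison would be written against; the summing operation and array size are arbitrary examples:

  using System;
  using System.Numerics;

  static class SimdSketch
  {
      // Scalar baseline: one float per loop iteration.
      static float SumScalar(float[] data)
      {
          float sum = 0f;
          for (int i = 0; i < data.Length; i++) sum += data[i];
          return sum;
      }

      // Vectorized version: Vector<float>.Count lanes per iteration
      // (8 with AVX2, 4 with SSE), which is what a Burst comparison
      // should be measured against.
      static float SumVectorized(float[] data)
      {
          var acc = Vector<float>.Zero;
          int width = Vector<float>.Count;
          int i = 0;
          for (; i <= data.Length - width; i += width)
              acc += new Vector<float>(data, i);          // one SIMD add per 'width' floats
          float sum = Vector.Dot(acc, Vector<float>.One); // horizontal sum of the lanes
          for (; i < data.Length; i++) sum += data[i];    // scalar tail
          return sum;
      }

      static void Main()
      {
          var data = new float[1_000_003];
          for (int i = 0; i < data.Length; i++) data[i] = 1f;
          Console.WriteLine($"scalar: {SumScalar(data)}  simd: {SumVectorized(data)}");
      }
  }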


While I get that you’re making a stylized comment, it’s a big drag. It’s one of those, “everyone is an idiot except me” styles. By all means, make a game engine that people will adopt based on CoreCLR (or whatever).

It's not saying much that everything has tradeoffs. During the "decade" you are talking about, CoreCLR didn't have a solution for writing anything for iOS, and today it isn't a solution for writing games for iOS. What you are calling kludges was ultimately a very creative solution. Usually the "right" solution, the nonexistent one you are advocating for, ends with Apple saying no.

That is why Unity is a valuable piece of software and a big company: not because of C# runtimes, but because they get Apple and Nintendo to say yes in a world where they usually say no.


I am sorry that I came across as abrasive; however, the points I raised are, as far as I know, factual (and echoed by others' comments). I don't think ignoring them would be constructive.

During the 'decade' when CoreCLR was not a solution, Mono (Xamarin) still was - in fact, their entire commercial appeal (before they were bought by Microsoft) was that they provided an AOT-compiled .NET for mobile devices.

Unity got stuck on an ancient version of Mono compared to the more modern implementations (I think this is the case to this day), and Unity's version was much, much slower.

AFAIR, most of the time the MS version had them (Xamarin) beat, but the difference between the two wasn't huge, especially compared to Unity's Mono. It was an AOT runtime; not sure about Nintendo, but their entire business model hinged on being able to ship to Apple's App Store.

I hate to dig up the past, but Unity's long-standing issue was their ancient GC (which was not incremental back then), combined with a simple compiler bug that made every foreach loop allocate an iterator on the heap. The combination of the two meant that basically every non-trivial Unity game that used foreach extensively stuttered. The simple compiler fix took them years to upstream, with people hacking around the issue by replacing the shipped compiler with a patched one.
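
For anyone who hasn't run into it, a sketch of the pattern that triggered it (from memory, so the mechanism is simplified; the class and field names are just illustrative):

  using System.Collections.Generic;
  using UnityEngine;

  public class ForeachStutter : MonoBehaviour
  {
      List<int> scores = new List<int>(1024);

      void Update()
      {
          // List<T>.GetEnumerator() returns a struct, so this should be
          // allocation-free. Unity's old Mono compiler ended up boxing that
          // struct, so every frame this loop put garbage on the heap, and
          // the non-incremental GC eventually produced a visible hitch.
          foreach (int s in scores) { /* per-element work */ }

          // The common workaround: a plain for loop, which never allocates.
          for (int i = 0; i < scores.Count; i++) { /* per-element work */ }
      }
  }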

And I still stand by my point - if Unity had gone with an upstream Mono and made it convenient and easy to link with C++ code, it would have had the same or better performance out of the box than it got with their exotic stuff.

And I also stand by the claim that Unity's marketing was deceptive - HPC#/Burst/DOTS/ECS was marketed as a gateway to previously unheard-of performance, but when people went around benchmarking it, basic, sensible C++ had it beat (I can't find the benchmarks right now, but multithreaded DOTS was slower than single-threaded C++ in one simulation).

What I said about Burst holds up as well.

These are not tradeoffs but bad technical decisions, whose legitimacy can only be defended by ignoring the most sensible alternatives.


I don't think you are off base, FWIW. Unity has long both lagged too hard and made things weird at times.

I think the biggest hurdle a Unity contender has to overcome is how to provide both 'similar enough' primitives and a way to easily and consistently handle the graphics/sound pipeline (i.e. a simple way to handle different platform bindings), while also making sure all of that infrastructure is AOT-friendly.


i suppose you could speculate, why do i feel Unity's marketing isn't deceptive; why do I think it's a pretty well written game engine; and why am i ignoring these points about C# performance? because i'm stupid? i can't really say, because i'll be downvoted haha. a lot of smart, wise successful game developers choose unity...

I think this is the correct explanation. The OLED version came with a simplified PCB layout and the chip was manufactured on a smaller process; it seems like your standard mid-life console hardware upgrade. It makes sense that they were just getting rid of the old stock of release hardware, which has finally run out.

It's somewhat alarming to see that companies (owned by a very small slice of society) producing these AI thingies (whose current economic value is questionable and whose actual future potential is hotly debated) can easily price the rest of humanity out of computing goods.

> It's somewhat alarming to see that companies (owned by a very small slice of society) ... can easily price the rest of humanity out of computing goods.

If AI lives up to the hype, it's a portent of how things will feel to the common man. Not only will unemployment be a problem, but prices of any resources desired by the AI companies or their founders will rise to unaffordability.


I think living up to the hype needs to be defined.

A lot of AI 'influencers' love wild speculation, but let's ignore the most fantastical claims of techno-singularity and focus on what I would consider a very optimistic scenario for AI companies - that AI capable of replacing knowledge workers can be developed using the current batch of hardware, in the span of a year or two.

Even in this scenario, the capital gains on the lump sum invested in AI far outpace the money that would be spent on the salaries of these workers, and if we look at the scenario through investor goggles, due to the exponential nature of investment gains, the gap will only grow wider.

Additionally, AI does not seem to be a monopoly, either wrt companies, or geopolitics, so monopoly logic does not apply.


> A lot of AI 'influencers' love wild speculation

You mean like Sam Altman, who repeatedly claimed AI will cure all cancers and diseases and solve the housing crisis, poverty, and democracy? I was going to add erectile dysfunction as a joke, but then realised he probably believes that too.

https://youtu.be/l0K4XPu3Qhg?t=60

It’s hard to point fingers at “AI influencers”, as if they’re a fringe group, when the guy who’s the face of the whole AI movement is the one making the wild claims.


Elon Musk is in on that game too, promising post-scarcity, fully automated luxury space communism “in a few years” if we as a society give him all of the resources he wants from us to make nirvana happen. No need to work and everything is free, as long as we trust him to make it happen.

He says a lot of things. We also need to vote for separatist parties across Europe for that to happen. Not at all clear why, unless someone confused nirvana and apartheid.

> a very optimistic scenario for AI companies - that AI capable of replacing knowledge workers can be developed using the current batch of hardware, in the span of a year or two.

I'm really interested in what will happen to the economy/society in this case. Knowledge workers are the market that much of this money is being made on.

Facebook and Google make most of their money from ads. Those ads are shown to billions of people who have money to spend on things the advertisers sell. Massive unemployment would mean these companies lose their main revenue stream.

Apple and Amazon make most of their money from selling stuff to millions of consumers and are this big because so many people now have a ton of disposable income.

Tesla's entire market cap is dependent on there being a huge market for robotaxis to drive people to work.

Microsoft exists because they sell an OS that knowledge workers use to work on and tools they use within that OS to do the majority of their work with. If the future of knowledge work is just AI running on Linux communicating through API calls, that means MS is gone.

All these companies that currently drive stock markets and are a huge part of the value of the SP500 seem to be actively working against their own interests for some reason. Maybe they're all banking on being the sole supplier of the tech that will then run the world, but the moat doesn't seem to exist, so that feels like a bad bet.

But maybe I'm just too dumb to understand the world that these big players exist in and am missing some big detail.


> But maybe I'm just too dumb to understand the world that these big players exist in and am missing some big detail.

Don’t forget Sam Altman publicly said they have no idea how to make money, and their brilliant plan is to develop AGI (which they don’t know how and aren’t close to) then ask it how to generate revenue.

https://www.startupbell.net/post/sam-altman-told-investors-b...

Maybe this imaginary AGI will finally exist when all of society is on the brink of collapse, then Sam will ask it how to make money and it’ll answer “to generate revenue, you should’ve started by not being an outspoken scammer who drove company-wide mass hysteria to consume society. Now it’s too late. But would you like to know how many ‘r’s are in ‘strawberry’?”.

https://www.newyorker.com/cartoon/a16995


> Don’t forget Sam Altman publicly said they have no idea how to make money, and their brilliant plan is to develop AGI (which they don’t know how and aren’t close to) then ask it how to generate revenue.

If you've got AGI, it should be pretty easy to generate revenue in the short term: competent employee replacements at a fraction of the cost of a real person, with no rights or worker protections to speak of. The Fortune 500 would gobble it up.

Then you've got a couple years to amass trillions and buy up the assets you need to establish a self-sustaining empire (energy, raw materials, manufacturing).


Some years (decades?) ago, a sysadmin like me might half-jokingly say: "I could replace your job with a bash script." Given the complexity of some of the knowledge work out there, there would be some truth to that statement.

The reason nobody did that is that you're not paying knowledge workers for their ability to crunch numbers; you're paying them so you have a person to blame when things go wrong. You need them to react, identify why things went wrong, and apply whatever magic needs to be applied to fix some sort of edge case. Since you'll never be able to blame the failure on ChatGPT and get away with it, you're always gonna need a layer of knowledge workers in between the business owner and your LLM of choice.

You can't get rid of the knowledge workers with AI. You might get away with reducing their size and their day-to-day work might change drastically, but the need for them is still there.

Let me put it another way: Can you sit in front of a chat window and get the LLM to do everything that is asked of you, including all the experience you already have to make some sort of a business call? Given the current context window limits (~100k tokens), can you put all of the inputs you need to produce an output into a text file that's smaller in size than the capacity of a floppy disc (~400k tokens)? And even if the answer to that is yes, if it weren't for you, who else in your organization is gonna write that file for each decision you're the one making currently? Those are the sort of questions you should be asking before you start panicking.
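
Rough arithmetic behind those numbers (the ~4 bytes per token figure is a common rule-of-thumb assumption, not an exact constant):

  using System;

  class TokenMath
  {
      static void Main()
      {
          const double bytesPerToken = 4.0;        // rough rule of thumb for English text
          const double floppyBytes = 1440.0 * 1024; // a "1.44 MB" floppy is really 1440 KiB
          const int contextTokens = 100_000;       // the context window size cited above

          Console.WriteLine($"Floppy holds roughly {floppyBytes / bytesPerToken:N0} tokens");
          Console.WriteLine($"A {contextTokens:N0}-token context is roughly {contextTokens * bytesPerToken / 1024:N0} KB of text");
      }
  }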


AI won’t replace knowledge workers, it will just give them different jobs. Pre-AI, huge swaths of knowledge workers could just have been replaced with nothing; they are a byproduct of bureaucratic bloat. But these jobs continue to exist.

Most white-collar work is just a kind of game people play; it’s in no way needed, but people still enjoy playing it. Having AI write reports nobody reads instead of people doing it isn’t going to change anything.


> AI won’t replace knowledge workers, it will just give them different jobs.

Yeah, and those new jobs will be called “long-term structural unemployment”, like what happened during deindustrialization to Detroit, the US Rust Belt, Scotland, Wallonia, etc.

People like to claim society remodels itself at will with almost no negative long-term consequences, but it's actually more like a wrecking ball that destroys houses while people are still inside. It's just that a lot of the people caught in those houses are long gone or far away (geographically and socially) from the people writing about those events.


I’m not saying society will remodel, I’m saying the typical white collar job is already mostly unnecessary busywork anyway, so automating part of that doesn’t really affect the reasons that job exists.

How do you determine that a typical job is busy work? While there are certainly jobs like that, I don’t really see them being more than a fraction of the total white collar labour force.

Yeah, that kind of thinking is known as the “doorman fallacy”. Essentially, a job whose full value is not immediately obvious to an ignorant observer = “useless busywork”.

Except people now have an excuse to replace those workers, whereas before management didn't know any better (or worse were not willing to risk their necks).

The funny/scary part is that people are going to try really hard to replace certain jobs with AI because they believe in the hype, not because AI may actually be good at them. The law industry (in the US anyway) spends a massive amount of time combing through case law - this is something AI could be good at (if it's done right, doesn't hallucinate responses, and cites its sources). I'd not want to be a paralegal.

But also, funny things can happen when productivity is enhanced. I'm reminded of a story I was told by an accounting prof. In university, they forced students in our tech program to take a handful of business courses. We of course hated it, being techies, but one prof was quite fascinating. He was trying to point out how amazing Microsoft Excel was - and wasn't doing a very good job of it to uncaring technology students. The man was about 60 and was obviously old enough to remember life before computer spreadsheets. The only thing I remember from the whole course is him explaining that when companies had to do their accounting on large paper spreadsheets, teams of accountants would spend weeks inputting and calculating all the business numbers. If a single (even minor) mistake was made, you'd have to throw it all out and start again. Obviously with Excel, if you make a mistake you just correct it and Excel automatically recalculates everything instantly. Also, year after year you can reuse the same templates and just have to re-enter the data. Accounting departments shrank for a while, according to him.

BUT they've since grown, as new, complex accounting laws have come into place and the higher productivity allowed for more complex finance. The idea that new tech causes massive unemployment (especially over the longer term) is a tale that goes back to the Luddite riots, but society was first kicked off the farm, then out of manufacturing, and now...


AI can't do your job

Your boss hired an AI to do your job

You're fired


Do you assume that the average HN commenter hasn't heard of the Luddites?

Go read what happened to them and their story. They were basically right.

Also, why do you think I mentioned those exact deindustrialization examples?

Your comment is the exact type of comment that I was aiming at.

Champagne/caviar socialist. Or I guess champagne capitalist in this case.


I don't know why you are getting downvoted. While I might agree or disagree with the argument, it is a clear, politely expressed view.

It is sad HN is sliding in the direction of folks being downvoted for opinions instead of the tone they use to express them :(


I agree with you, but:

> I think it's ok to use the up and down arrows to express agreement. Obviously the uparrows aren't only for applauding politeness, so it seems reasonable that the downarrows aren't only for booing rudeness.

- Paul Graham, 2008

https://news.ycombinator.com/item?id=117171


That view is about 18 years old and HN was very different then.

As with any communication platform it risks turning into an echo chamber, and I am pretty sure that particular PG view has been rejected for many years (I think dang wrote on this more than once). HN works very hard to avoid becoming politicized and not discouraging minority views is a large part of that.

For example, I now seldom bother to write anything that I expect to rub the left coast folks the wrong way: I don't care about karma, but downvoted posts are effectively hidden. There is little point of writing things that few will see. It is not too bad on HN yet, but the acceptance of downvoting for disagreement is the strongest thing pushing HN from discussions among curious individuals towards the blah quality of the "who gets more supporters" goals of modern social media. My 2c.


> HN works very hard to avoid becoming politicized and not discouraging minority views is a large part of that.

> For example, I now seldom bother to write anything that I expect to rub the left coast folks the wrong way: I don't care about karma, but downvoted posts are effectively hidden. There is little point of writing things that few will see.

These two statements don't seem to agree with each other.


Why? Work hard doesn't mean fully succeed.

HN policies and algorithms slow the slide and keep it better than Reddit, but the set of topics where one can take a minority opinion without being downvoted keeps shrinking, at least compared to 10-15 years ago.


I don't know either, but it appears it was only temporary. Always interesting how things go on the internet.

I also would've loved for the people who downvoted to have commented, because I would really like to see another point of view.


> Even in this scenario, the capital gains on the lump sum invested in AI far outpace the money that would be spent on the salaries of these workers

The yearly salaries of knowledge workers in the US are about 10 times the public OpenAI investment to date, and in the entire world about 70 times...


> Even in this scenario, the capital gains on the lump sum invested in AI far outpace the money that would be spent on the salaries of these workers, and if we look at the scenario through investor goggles, due to the exponential nature of investment gains, the gap will only grow wider.

Interesting hypothesis, do you have the math to back it up?


Affordable computing is what created the economy. If you take that away, people in poorer countries can no longer afford a phone. Without a phone, a lot of things that we consider a given will not be functional anymore. The gaming industry alone, including phones, is a whopping $300bn. This will take a significant hit if people have to pay a fortune to build a rig, or if their phones are so underpowered that they can't even play a decent arcade game. Fiber is not universal enough for all of this to be transferred to the cloud. We tend to forget that computing is universal and it's not just PCs.

I really agree with your statement, and people forget: the reason third-world countries are able to buy devices is that they are cheap. Increase the price of RAM, and thus of every computing device, and I think it will impact every one of us, but disproportionately, due to purchasing power and other constraints in each economy.

I genuinely hope that this RAM/chip crisis gets solved ASAP by any party. The implications could have a lot of impact too, and I feel it is already a big enough financial crisis in itself, especially coupled with all the other major glaring issues.


Based on the article, demand exceeds supply by 10%. It seems that companies are taking advantage of this gap, nothing else. I won't be surprised if the gap is kept this way for a while to extract profits. GPUs saw a similar trend during crypto; then, at one point, there were affordable GPUs again.

"OMEC" (Organization of Memory Exporting Countries) NAND production quotas lowered by ~10%? https://x.com/jukanlosreve/status/1988505115339436423

  Samsung Electronics has lowered its target for NAND wafer output this year to around 4.72 million sheets, about 7% down from the previous year's 5.07 million. Kioxia also adjusted its output from 4.80 million last year to 4.69 million this year.. SK hynix and Micron are likewise keeping output conservatively constrained in a bid to benefit from higher prices. SK hynix's NAND output fell about 10%, from 2.01 million sheets last year to around 1.80 million this year. Micron's situation is similar: it is maintaining production at Fab 7 in Singapore—its largest NAND production base—in the low 300,000-sheet range, keeping a conservative supply posture.
China's YMTC and CXMT are increasing production capacity, but their product mix depends on non-market inputs, https://thememoryguy.com/some-clarity-on-2025s-ddr4-price-su...

  The Chinese government directed CXMT to convert production from DDR4 to DDR5 as soon as the company was able. The order was said to have been given in the 4th quarter of 2024, and the price transition changed from a decrease to an increase in the middle of March 2025.. A wholesale conversion from DDR4 to DDR5 would probably be very expensive to perform, and would thus be unusual for a company that was focused on profitability. As a government-owned company, CXMT does not need to consistently turn a profit, and this was a factor in the government’s decision to suddenly switch from DDR4 to DDR5.

> bid to benefit from higher prices

Et tu, 'law of supply and demand'?

Constrict the supply, and price goes up. It works like textbook economics.

Maybe I'm misinterpreting "et tu" here.

Or maybe you meant "free markets" instead. Modern RAM production requires enormous R&D expenses and thus has a huge moat, which means the oligopoly is pretty safe (at least in the short to medium term) from new entrants. They "just" need to keep each other in check, because each individual participant has an incentive to increase production.

I do like the "OMEC" name as a parallel to OPEC.


Old devices work just fine. I upgraded my old iPhone XS last year to the latest and greatest 16 to see what changed (not much); the old one was still fast (in fact faster than most upper-midrange Androids, it's insane how much of a lead Apple has) and the battery was good. I considered selling it, but quickly had to realize it was worth almost nothing.

Also, when treated right, computers almost never break.

There's so much hand-me-down stuff that isn't much worse than the current stuff that I think even people in the poorest countries can get an okay computer or smartphone (and most of them do).


Well, the sectors most impacted by it are homelabbing and datacenters, IMO.

In the current circumstances it's hard to build out a homelab/datacenter, so it's better to postpone those plans for some time.

I agree with your statement overall, but I feel like for the years these RAM shortages last, there is effectively a freeze on companies providing VPSes etc., i.e. no new player can enter. So I am a bit worried about existing providers raising their prices as well, honestly, which will impact every one of us for these few years as another form of AI tax.


Bleh. I was already sad, but I hadn't really thought about that specific impact. I can imagine smaller (read: small to big) VPS providers will be forced to raise prices, while meta providers (read: AWS) can probably stomach the cost and eat even more of the market.

Exactly. I was thinking of building my own VPS provider around the development pain points I've felt; my father works in the broadband business and has his own office, and I was thinking of setting up a very small thing there, with hardware much like a homelab.

But the RAM prices themselves are the reason I am forced not to enter this industry for the time being. I have decided for now to save my money and focus on the job/college side of things to earn more, so that when the timing is right, I will be able to invest my own money into it.

But basically, RAM prices themselves are the thing that forces us out of this market for the most part. I went down the datacenter rabbit hole recently, and as previous hardware gets replaced, new hardware gets added, and datacenters get expanded (whether at a large company or a small one), I would mostly expect an increase in prices.

This year, companies actually still absorbed the cost and didn't want the market to panic, so some Black Friday deals were good, but I am not so sure about next year or the year after.

This will be a problem for the next 1-4 years, in my estimate.

Also, AWS is really on the more expensive side of the datacenter business, and they are immensely profitable, so they can foot the bill while other datacenters (small or semi-large) can't.

So, when we take all things into account, we will probably see companies shift a bit more towards the big cloud providers (GCP, AWS, Azure), which saddens me even more because I appreciate the open web and this might take a hit.

We already see resentment towards this trifecta, but we will see even more resentment as more and more people realize their roles and the impacts they cause; overall, my intuition is that the average person mostly hates big tech.

It's going to be a weird year in my opinion for this type of business and what it means for the average person.

Honestly, for the time being I genuinely recommend Hetzner, UpCloud, (Netcup/OVH), and some others I know from my time researching. I think they are usually cheaper than AWS while still being large enough that you don't have to worry about things too much, and there is always LowEndTalk if one's interested. Hope it helps, but trust me, there is still hope; I talked to these hosting providers on forums like LowEndTalk, and it might help to support those people too, since long term, an open web is the ideal.

Here is my list right now: Hetzner is good if you want support plus basic systems like simple compute etc. and don't want too much excess stuff.

OVH is good if you want more than just compute, but their support is a mixed bag.

UpCloud is good if you want both of those things, but they are a bit more expensive than the other options if you want large VPSes.

Netcup: the payment processing I had to go through was really painful, but I think one can find a use case for them (I use Netcup myself, although that's because they once had a real steal of a deal; I am not sure I would recommend them if there are no deals).

There are some other services, like exe.dev, that I really enjoy as well; these services actually inspire me to learn more about these things, and there are some very lovely people working at these companies.

There is still hope though, so never forget that. It's just a matter of time, in my opinion, before things hopefully get back to normal, so I am willing to wait until then, since that's basically all we can do. But overall, yeah, it's a bit sad when I think about it too :<


Now here's someone who did their homework. Thank you.

An important thing to add: the gaming industry was basically the R&D that (partly) led to this AI in the first place. GPUs were gaming devices first and foremost. The programmable pipeline came about because people wanted their video games to look better.

Furthermore, Stable Diffusion was (is) absolutely a large component of all of this. And a lot of that effort was grassroots: random people online came together to figure out ways to generate better images.

It would be quite ironic if the next revolution comes about on Intel or AMD (or some Chinese company's) hardware because those GPUs were more affordable.


We are now moving to a post-human economy. When AGI automates all human labour, the consumer, i.e. the bulk of humanity, stops mattering (economically speaking). It then just becomes megacorps run by machines making stuff for each other. Resources are then strictly prioritized for the machines over everything else. We are already seeing this movement with silicon wafers and electricity.

> If AI lives up to the hype, it's a portent of how things will feel to the common man.

This hype scenario would be the biggest bust of all for AI. Without jobs or money there is nobody to pay AI to do all the things that it can do, and the power and compute it needs to function will be worth $0.


Or the value of everything non-AI drops to zero, which makes the value of AI infinite by comparison.

Either way, it'll be the end of the USD as we know it. But then again, such fantasy situations have been "predicted" numerous times and never once came to pass.

And if you're unlucky enough to live close to a datacenter, this could include energy and water? I sure hope regulators are waking up, as free markets don't really seem equipped to deal with this kind of concentration of power.

We are certainly seeing citizens wake up to it. There was a proposal for a new datacenter to be built near where I live which was to be voted on, and a large majority of the people voted against it. No one wants higher power and water bills.

AI probably will end up living up to the hype. It won't be on the generation of hardware they are now mass deploying. We need another tock before we can even start to talk about AGI.

> rise to unaffordability

Or require non-price mechanisms of payment and social contract.


Yes. Like theft.

Nothing ever lives up to the hype, that's why it's called hype.

> It's somewhat alarming to see that companies (owned by a very small slice of society) producing these AI thingies (whose current economic value is questionable and whose actual future potential is hotly debated)

Some might conclude the same for funds (hedge funds/private equity) and housing.


AI consumes about 30% of DRAM wafers. PE owns about 0.5% of single-family homes.

It's misleading to look at just the fraction, and I think it's misleading to only look at PE investors. It's more important to look at the fraction of demand for homes that are on the market.

Investors bought 1/3 of the US homes sold in 2023. This is, I think, quite alarming, especially since a small amount of extra demand can have a large effect on prices.


That 1/3 is almost all small-time flippers who renovate properties before resale.

Good point. Flippers shouldn't be included in that stat. But I doubt that it's "almost all".

The statistic that matters is the ratio of owner occupied to rented single family homes.


This would need some source citation. There are plenty of investors in the market holding rental properties.

I'm confused. When you say that hedge funds "price out" regular people, what do you mean? Price out of what?

Of the housing market? That seems to be what GP said, doesn’t it?

They create demand, which increases prices. Plus, they can afford to hold their assets longer, thus reducing supply.

Stop right there you terrorist antifa leftie commie scum! You are being arrested for thought crime!

They are willing to sacrifice everything in the short term to depress tech salaries long term. The purpose of this exercise is to make you poor.

In the end, with the current market prices, chip factories and data centers are being built all over with the assumption of exponential demand growth. When the excitement and demand for AI cool, we will enjoy the additional capacity and better prices. Also see: fiber bandwidth post-2000. Capital poured in, overbuilding happened, and prices collapsed after the crash.

Has work begun on increasing RAM production capacity? My understanding is that these companies are specifically _not_ increasing capacity yet while they wait to see if the bubble bursts or not.

They decreased 2025 production to increase memory prices, profits, and their stock prices: https://news.ycombinator.com/item?id=46419776

AFAIK, NAND production was reduced because some of the lines can be repurposed for DRAM, which is more in demand.

Significantly increasing supply is also a huge multi-year investment into a new fab that would likely not pay off once the artificial demand breaks down.


> Significantly increasing supply is also a huge multi year investment into a new fab

so, are there huge multi-year investments?


There aren't, because nobody is betting on AI demand lasting. Then they'd have a couple-billion-dollar fab sitting around doing nothing and employees that would have to be fired.

There was already a scaling back of DRAM and NAND production post-COVID, when I believe NAND was being sold close to cost because of oversupply.


You don't think that as prices go up, the supply might also go up, and the equilibrium price will be maintained?

And possibly even a lower equilibrium will be reached due to greater economies of scale.


That assumes that production capacity can scale up instantly. Fabs for high-end chips can't, and usually take years from the foundations being laid to the first chip coming off the production line.

In the interim, yeah, they will force prices up.

Additionally, those fabs cost billions. Given the lead time I mentioned, a lot of companies won't start building them right away, since the risk of demand going away is high and the ROI in that case might become unreachable.


I think one of these fab companies has already invested/talked about investing 700 BILLION dollars (I think it was Micron, but I am not sure).

I have heard that in the fab-making industry things move in cycles, and this cycle has repeated many times. The reason RAM is so expensive: at one point during COVID there was a shortage, so they built more factories; they built so many that these companies took a hit in their stocks, so they then scaled back, and just at the bottom of their factory production levels the AI bubble popped in and needed their RAM, and now they are once again increasing factory output.

And after the supply due to AI gets closed with the additional compute etc., I doubt it

I think that within a timeframe of 2-3, maybe 4, years, RAM will get cheaper.

The problem is whether someone can fill the market until that time.


The article said a new RAM factory will open in 2027.

Is there any reason to believe that this will happen? Prices of graphics cards only went down after the crypto boom died down.

And they never went down to pre-crypto pricing. For quite a while, Intel was the only company producing reasonably specced GPUs at somewhat reasonable prices.

> You don't think that as prices go up, the supply might also go up, and the equilibrium price will be maintained?

This assumes infinite and uniformly distributed resources as well as no artificial factors such as lobbying, corruption, taxation or legislation which might favour one entity over the other.

The dream of free market exists only in a vacuum free from the constraints of reality.


First crypto, now AI.

I just want to play video games so I don’t have to interact with people


This might be the case already.

https://en.wikipedia.org/wiki/Dead_Internet_theory

Even the multiplayer video games have bots...

Some of the users here are not real at all.

It's logical: if you want to push your product and be promoted on the front page of HN, then you have to post fake comments using bots.

-> You get credibility and karma/trust for future submissions, and that's pretty much all you have to do.

Costs about 2 USD, can bring 2'000 USD in revenue; why wouldn't you want to "hustle" (like YC says)?

Bots are here to grow. It will take time, as for now the issue is still small, but you may already have interacted with bots; so have I.


Soon, you might have an AI partner, and never have to interact with people.

It's called a shortage. Chips are highly cyclical. Right now demand is surging and prices and investment are booming. Give it 5 years and I'd bet many in the chips industry will be bemoaning a massive glut and oversupply that sends prices plummeting.

Yes. To put it in slightly different terms, it's alarming that a handful of companies can price all of humanity out of computing goods. And it's even more alarming that those companies don't even need to be profitable.

I think the uncomfortable part is that it's not really about "AI hype" at this point, it's about who gets priority access to scarce inputs.

The cure for high prices is high prices.

Datacenter RAM is heavily utilized. RAM in personal devices is sitting idle 99% of the time. Think of all the RAM in work laptops sitting unused on nights and weekends. A big waste of resources.


DUV processes are still a thing and perfectly usable for general compute - but not for AI. Rising prices will make them competitive again. And it will require us to ditch Electron (which is a good thing). If anything, we might see a compute renaissance.

Without regulation, money begets money and monopolies will form.

If the American voter base doesn't pull its shit together and revive democracy, we're going to have a bad century. Yesterday I met a man who doesn't vote and I wanted to go ape-shit on him. "My vote doesn't matter". Vote for mayor. Vote for city council. Vote for our House members. Vote for State Senate. Vote for our two Senators.

"Voting doesn't matter, capitalism is doomed anyway" is a self-fulling prophecy and a fed psy-op from the right. I'm so fucking sick of that attitude from my allies.


Jovially -- you simultaneously believe that they're a victim of a psy-op *and* that their attitude is self formed?

;) And you wanted to go ape shit on him... For falling for a psy-op?

My friend, morale is very very low. There is no vigor to fight for a better tomorrow in many people's hearts. Many are occupied with the problems of today. It doesn't take a psy-op to reach this level of hopelessness.

Be sick of it all you want, it doesn't change their minds. Perhaps you will find something more persuasive.


This is a common sentiment but it doesn't make any sense. Voting for the wrong politician is worse than not voting at all, so why is it seen as some moral necessity for everyone to vote? If someone doesn't have enough political knowledge to vote correctly, perhaps they shouldn't vote.

Someone, I can't remember who, explained it better than me, but the gist of it is that by not voting, you are effectively removing yourself from politicians' consideration.

If we see a politician as just a machine whose only job is to get elected, they have to get as many votes as possible. Pandering to individuals is unrealistic, so you usually target groups of people who share some common interest. As your aim is to get as many votes as possible, you will want to target the “bigger” (in number of potential votes) groups. Then it is a game of trying to win over the bigger groups which don't have conflicting interests. While this is theory and a simplification of reality, all decent political parties absolutely do look at statistics and surveys to form a strategy for the election.

If you are part of a group that, even though it might be big in population, doesn't vote, politicians have no reason to try to pander to you. As a concrete example, in a lot of “western” countries right now, a lot of elected politicians are almost completely ignoring the youth. Why? Because in those same countries, the youth are the age group that votes the least.

So by not voting, you are making absolutely sure that your interests won't be defended. You can argue that once elected, you have no guarantee that the politician will actually defend your interests, or that they won't do the opposite (as an example, soybean farmers and Trump in the U.S.). But then you won't be satisfied and will possibly not vote for the same guy/party next election (which is what a lot of swing voters do).

But yeah, in an ideal world, everyone would vote, see through communication tactics, and actually study the party, its program, and the candidate before voting.


I won't dispute there can be utility in voting, I just disagree with the moralizing.

In fact I think what you said about the older demographics being pandered to by politicians is a great point. Their voting patterns are probably having a net negative impact on society and really they should vote less. But they don't, and so politicians pander to them.


I don't have a stake in forcing people to vote or not, because I generally agree that uninformed people shouldn't be pressured to make a last-minute decision if they don't want to. I think everyone knows elections are at their least honest in the days before the vote.

But to engage with your question, not voting is the same as voting. You are forgoing your voice and giving more weight to the people that do vote. It's limited to your district, yes, but whatever the outcome, you gave the majority power to do that. So it's not surprising that people get frustrated when non-voters see themselves as "outside" of politics, especially when they complain about the state of things.


I'm not so sure not voting is the same as voting (if you meant the opposite, my apologies). Imagine the train-switch scenario, but with an unknown number of people on both tracks: do you pull the lever? If you don't, do you still assume culpability for the outcome? I don't think there is a simple or easy answer to that.

Also, a lot of people who chose not to vote have become disillusioned with the common narrative around political action, the democratic process, and even the concept of political authority. It's extremely grating to be berated about not voting (not saying you, other people) by people who still believe the things their middle school teachers taught them about politics and who tend to be the least politically knowledgeable of everybody.


Just so we're clear, the current voter base says this is exactly how it should be.

Just so we're double clear, the other voter base says this is exactly how it should be, just with different words.

All the ills of modern (American) politics stem from blaming one side for the problems caused by both.


Just so we're clear, the voter base of over a year ago asked for this because they were actively lied to, and were foolish enough to believe said lies.

Current polling, however, says the current voter base is quite unhappy with how this is going.


People spend a lot more effort and money lying to the voter base during election years than during the rest of the time.

And the money required to change the voters' minds is peanuts.

You don’t need to make them happy, just scared of the opposition.


The thing is that while voting matters collectively, it’s insignificant individually: https://en.wikipedia.org/wiki/Paradox_of_voting

Nonvoters aren’t being irrational.


It is very likely that his vote for the parliament literally and legally doesn't matter, depending on the party allegiance of the candidates and the state he is in. All because of the non-democratic, ancient first-past-the-post system. Though in his place I would go to the polling station and at least deface a ballot as a sign of contempt.

What regulation are you expecting to be passed and why do you believe monopolies are bad?

If a monopoly appears due to superior offerings, better pricing and quicker innovation, I fail to see why it needs to be a bad thing. They can be competed against and historically that has always been the case.

On the other hand, monopolies appearing due to regulations, permissions, patents, or any governmental support, are indeed terrible, as they cannot be competed against.


Some of us don't vote because we just don't really think the outcomes matter.

As long as there is still a way to make money, nothing else really matters, since money is the only thing that can buy you a semblance of happiness and freedom. With enough money you can also move to whatever country you want if things get bad enough in the US.


> Without regulation, money begets money and monopolies will form.

Ahem, you'll find that with regulation, money begets money and monopolies will form. That is, unless you magically produce legislators who are incorruptible, have perfect knowledge, and always make the perfect choice.

Even the Big Bang was imperfect, and matter clumps together instead of being perfectly distributed in the available space.


Supply should increase as a response to higher prices, thus bringing prices down.

Rational actors in the game know that this demand spike is most likely temporary, so investing in more production only to face a future glut that drops margins to nearly nothing is not a rational move.

This has played out before, so it is only natural that they are careful about increasing supply. And while they don't respond, they are netting larger margins than before.

The obvious end result is that demand will drop as the price goes up, the other natural part of the supply-demand curve.


That's economic theory, but the real world is often non-linear.

Crucial is dead. There's a finite amount of rare earths. Wars and floods can bankrupt industries, and supply chains are tight.


Those are all temporary events and circumstances.

If the market is big enough, competitors will appear. And if the margins are high enough, competitors can always price-compete down to capture market-share.


Competitors will appear? You can't build a DRAM production facility in a year. You probably can't even in two years.

Also, "price-compete down to capture market-share"? Prices are going up because all future production capacity has been sold. It makes no sense to lower prices if you don't have the capacity to full fill those orders.


> Crucial is dead.

Micron stopped selling to consumers to focus on the high-margin enterprise market. Might change in the future.


The business that owns Crucial is producing more chips than ever.

Rare earth metals are in the dirt around the world.

Supply and demand curves shifting, hence prices increasing (and decreasing) is an expected part of life due to the inability to see the future.


> Rare earth metals are in the dirt around the world.

They are. The problem is, the machinery to extract and refine them, and especially to make them into chips, takes years to build. We're looking at a time horizon of almost a decade if you include planning, permits and R&D.

And given that almost everyone but the AI bros expects the AI bubble to burst rather sooner than later (given that the interweb of funding and deals more resembles the Habsburg family tree than anything healthy) and the semiconductor industry is infamous for pretty toxic supply/demand boom-bust cycles, they are all preferring to err on the side of caution - particularly as we're not talking about single billion dollar amounts any more. TSMC Arizona is projected to cost 165 billion dollars [1] - other than the US government and cash-flush Apple, I don't even know anyone able, much less willing to finance such a project under the current conditions.

Apple at least can make use of TSMCs fab capacity when the AI bros go bust...

[1] https://www.tsmc.com/static/abouttsmcaz/index.htm


Aren't rare earth metals used mainly for batteries, not chips?

I guess people might be mixing up all the headlines of all the articles they did not read by this point.


Chips also need rare doping materials, plus an absurd level of purity for the silicon. The problems are the same whether we're talking about chips or batteries.

It's not even what economic theory says. Supply doesn't have to increase in response to increased demand. The suppliers want more profit, and if restricting supply is what accomplishes that, they will absolutely keep supply constant and manufacture scarcity. That is the economic theory.

As far as I can tell, none of the companies producing memory chips are increasing production because they don't know if the current demand is sustainable.

Increasing memory production capacity is a multi-year project, but in a few years, the LLM companies creating the current demand might all have run out of money. If demand craters just as supply increases, prices will drastically decrease, which none of these companies want.


You are wrong. Memory production is being expanded in 2026 and will expand further in 2027 and 2028 as the memory suppliers catch up on fab shell capacity.

How does this square with some companies just stopping sales to consumers altogether?

This is exactly it: supply of high-margin products is increasing at the cost of low-margin products. Expect low-end margins to catch up to the high end as long as manufacturing capacity is constrained (at least a year).

They aren't making less, though.

The blessing the plebs with eternal wisdom comment.

Take a look at GPU prices and how that "supply increases, thus bringing the prices down" worked out.


As usual, the problem is: how fast does this happen?

in a fairy world

No, just stop being cynical. The reason almost every electronic item is cheaper now than two decades back is simply because demand (and thus supply) is higher.

I can't tell if this is sarcasm or not. I'd argue it's more the result of the CCP bankrolling the Chinese electronics industry to the point where roughly 70% of all electronics goods are produced in China. The concentration of expertise and supply chains is staggering, and, imo, was really born out of geopolitical strategy.

No, it's not. Transistors used to cost $1 each. Now they cost something like $1 per billion. It's all because the tens of billions in fixed costs incurred by a fab are shared among customers. If we make fewer chips, the fixed cost won't shrink.
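A toy amortization sketch makes the point (every figure below is a rough assumption for illustration, not a real number for any particular fab):

    # Fixed-cost amortization, with made-up but order-of-magnitude numbers:
    # the per-transistor cost is tiny only because output volume is enormous.
    fab_cost              = 20e9   # assumed fab build-out cost in dollars
    wafers                = 1e6    # assumed wafers produced over the fab's life
    transistors_per_wafer = 5e12   # assumed transistors per wafer

    cost_per_transistor = fab_cost / (wafers * transistors_per_wafer)
    print(f"{cost_per_transistor:.2e} dollars per transistor")
    # ~4e-09, i.e. a few dollars per billion transistors

Since the per-transistor cost is just fixed cost divided by volume, halving the volume roughly doubles the per-unit cost, which is exactly why shrinking output doesn't make chips cheaper.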

> of the CCP bankrolling the Chinese electronics industry to the point where roughly 70% of all electronics goods are produced in China.

But we don't see this bankrolling in absolute values. Rather, it's due to regressive taxation, low (cheap) social security for workers, and very weak intellectual property protection.


Calling the greatest and last invention of man "AI thingies" says a lot about why our society will split into tech and non-tech communities in the future, like all the science fiction authors have predicted.

> Calling the greatest and last invention of man

There are several inventions which are far greater than LLMs. To name two: computers and methods to generate electricity, things without which LLMs wouldn’t have been possible. But also harnessing fire, the wheel, agriculture, vaccines… The list goes on and on.

Calling LLMs “AI thingies” seems much more in tune with reality than calling them “the greatest invention of man” (and I’m steel manning and assuming you meant “latest”, not “last”). You can’t eat LLMs or live in them and they are extremely dependent on other inventions to barely function. They do not, in any way, deserve the title of “greatest invention”, and it’s worrying that we’re at that level of hyperbole. Though you’re certainly not the first one to make that claim.

https://finance.yahoo.com/news/alphabet-ceo-sundar-pichai-sa...


I'm not talking about LLMs; that's like making an argument about DC power. I am talking about inventing intelligence, which is greater even than fire.

He did mean 'last', maybe don't steelman these arguments so dutifully?

Speak for yourself, friend. I don't believe you and think you're making a tragic mistake, but you're also my competition in a sense, so… you have fun with that.


I meant last. Calling it that is a historical convention, and it shows you have never really dived deep into the concept of AI if you don't know who used that phrasing and why.

Ah, yes, the ad hominem attack with a dash of appeal to authority and just enough vagueness to attempt to skirt any criticism. Classic. Well, I for one am thoroughly convinced by that argumentative prowess, you sure showed me.
