There should be plenty of existing programming models that can be reused, because HPC relied heavily on single-image, multi-hop NUMA systems before Beowulf clusters took over.
Even today, I think very large enterprise systems (where a single kernel runs on a single system that spans multiple racks) are built like this, too.
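To illustrate the model, here's a minimal sketch (assuming Linux with libnuma; link with -lnuma) of what single-image NUMA programming looks like: one kernel, one address space, and explicit memory placement instead of message passing:

    #include <numa.h>   /* libnuma */
    #include <stdio.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this system\n");
            return 1;
        }

        /* One system image, many memory nodes. */
        int nodes = numa_max_node() + 1;
        printf("%d NUMA nodes visible to a single kernel\n", nodes);

        /* Pin a buffer to the last node; on a multi-hop machine,
           accesses from distant nodes pay extra router hops. */
        size_t len = 1 << 20;
        void *buf = numa_alloc_onnode(len, nodes - 1);
        if (buf != NULL)
            numa_free(buf, len);
        return 0;
    }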
I think it's an interesting model. Somehow, the maintenance needs to be funded, and that is an ongoing effort. Charging for security updates is not ideal, but I'm not sure what the alternative would be.
It seems like it would be cheaper and more effective to just keep in sync with GrapheneOS rather than maintaining a custom fork.
I understand that maintenance still isn't free in that case, but it seems like they went out of their way to create more maintenance work for themselves and then asked their customers to pay for it. As a potential customer, I'd rather it just shipped with standard GrapheneOS than pay yearly for a fork that probably isn't as secure.
Also, what if it's mandatory? I'd say it's desirable to prevent users from ending up with zombie devices just because security costs extra, and either making updates free or making them mandatory and paid would work for that.
How would they make it mandatory, though? The only ways I can think of are having the phone brick itself when the subscription ends, or leasing the phone with updates included in the lease.
It seems like the best approach would be to just include the cost of updates in the price of the phone, which I guess is what every other phone maker does.
It's not dynamic linking, despite excellent support for very late binding in historic Java versions. (Newer versions require specific launcher configurations to use certain platform features, which breaks late loading of classes that use those features.)
Bundling the JRE with the application typically results in something that is not redistributable under the default OpenJDK license: the Java ecosystem is heavily tilted towards the Apache license, but Hotspot is licensed under the GPL v2 only (no Classpath Exception), and the Apache license and older GPL versions (before 3) are generally assumed to be incompatible.
Every modern OpenJDK build is licensed as GPLv2 with the Classpath Exception, and that covers HotSpot, since it's part of the JVM. The exception allows shipping the JVM with your app or linking to it; otherwise, a lot of enterprise software couldn't exist.
> I am also deeply concerned about the “speculative” data center market. The “build it and they will come” strategy is a trap. If you are a hyperscaler, you will own your own data centers.
Is this actually true? I thought that hyperscalers keep datacenters at arm's length, using subsidiaries and outsourcing a lot of things.
Hyperscalers use various subsidiaries and shell companies to dodge taxes and keep the debt off their balance sheets so they can keep their AAA ratings, but the resulting datacenters are ultimately still owned and operated in-house. Hyperscalers do not and will not use any of these “DC as a service” startups, meaning those startups have to find customers elsewhere; the big question mark is whether enough of those customers exist.
> If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.
I'm worried that there is a tendency in LLM-generated code to avoid even local abstractions, such as moving common code into separate (local) functions or using records/structures. You end up with code that is best maintained with an LLM, which is good for the LLM provider and their future revenue. But we humans, as reviewers and ultimate long-term maintainers, benefit from those minor abstractions.
Yeah, I find myself needing to watch out for that. I'll frequently say "refactor that to reduce duplicated code" - which is generally very safe once the LLM has added test coverage for the new feature.
We're slowly getting back to similarly sized systems. IBM now has POWER systems with more than 1,500 threads (although I assume those are SMT8 configurations). This is a bit annoying because too many programs assume that the CPU mask fits into 128 bytes, which limits the CPU (hardware thread) count to 1,024. We fixed a few of these bugs twenty years ago, but as those systems fell out of use, similar problems have crept back in.
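As a sketch of the failure mode (a minimal example, assuming glibc on Linux): a fixed cpu_set_t is CPU_SETSIZE = 1024 bits, i.e. exactly 128 bytes, so on bigger machines code has to switch to the dynamically sized CPU_ALLOC variants:

    /* glibc needs _GNU_SOURCE for the CPU_ALLOC family */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        /* May exceed 1024 on large POWER/SGI-style machines. */
        long ncpus = sysconf(_SC_NPROCESSORS_CONF);

        /* A plain cpu_set_t would cap this at CPU_SETSIZE; worse,
           sched_getaffinity() fails with EINVAL if the buffer is
           smaller than the kernel's CPU mask. */
        cpu_set_t *set = CPU_ALLOC(ncpus);
        size_t setsize = CPU_ALLOC_SIZE(ncpus);

        CPU_ZERO_S(setsize, set);
        if (sched_getaffinity(0, setsize, set) != 0) {
            perror("sched_getaffinity");
            return 1;
        }
        printf("runnable on %d of %ld hardware threads\n",
               CPU_COUNT_S(setsize, set), ncpus);
        CPU_FREE(set);
        return 0;
    }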
> Driven by 1,024 Dual-Core Intel Itanium 2 processors, the new system will generate 13.1 TFLOPs (Teraflops, or trillions of calculations per second) of compute power.
This is equal to the combined single precision GPU and CPU horsepower of a modern MacBook [1]. Really makes you think about how resource-intensive even the simplest of modern software is...
Note that those 13.1 TFLOPs are FP64, which isn't supported natively on the MacBook GPU. On the other hand, local/per-node memory bandwidth is significantly higher on the MacBook. (Apparently, SGI Altix only had 8.5 to 12.8 GB/s.) Total memory bandwidth on larger Altix systems was of course much higher due to the ridiculous node count. Access to remote memory on other nodes could be quite slow because it had to go through multiple router hops.
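As a sanity check, the quoted figure matches the peak FP64 rate if we assume 1.6 GHz parts with two fused multiply-add units per core (4 FLOPs/cycle):

$1024 \text{ sockets} \times 2 \text{ cores} \times 4\,\tfrac{\text{FLOP}}{\text{cycle}} \times 1.6\,\text{GHz} \approx 13.1\,\text{TFLOPS (FP64)}$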
The most reasonable reading of that description is that VSCode itself is open source, not that it is only intended for editing open-source software. Nothing in the license suggests the latter, and if that was their intent, they did not communicate it clearly.
The AGPL does not prevent offering the software as a service. It's got a reputation as the GPL variant for an open-core business model, but it really isn't that.
Most companies trying to sell open-source software probably lose more business if the software ends up in the Debian/Ubuntu repository (and the packaging/system integration is not completely abysmal) than when some cloud provider starts offering it as a service.