The real problem is deeper than this. The actual question to ask is:
"What if we just stopped distributing and blindly executing untrusted binary blobs?"
A trusted compiler in the OS, plus a set of intermediate representations for code distribution, would eliminate a large class of security issues, improve compatibility, and leave room for future performance gains and for rejecting suspect code patterns (Spectre, Rowhammer, etc.). Specializing programs at install time for the local hardware makes far more sense than being locked into hardware machine-code compatibility.
That does nothing to fix the vast majority of security issues, which are caused by trusted but not memory-safe programs running on untrusted input.
It's also an extremely unrealistic goal. First, you run into a massive problem with companies and copyright. Second, it will be very hard to convince users that it's normal for their Chrome installation to take half an hour or more while pegging their CPU at 100% the whole time.
"The Verified Software Toolchain project assures with machine-checked proofs that the assertions claimed at the top of the toolchain really hold in the machine-language program, running in the operating-system context."
Some of the same researchers worked on TAL (typed assembly language), which sounds like it could be one of the "intermediate representations" you mentioned.
For a while, Apple required apps to be submitted to the App Store as bitcode (LLVM IR), from which x86 or Arm machine code could be generated. They stopped that a couple of years ago, after the migration to Apple Silicon.
Thanks for the correction and link. Relevant to the comment above about binary blobs:
> The idea to have apps in a CPU neutral form available on the app store is not a bad one; that's why Android chose Java Byte Code (JBC). Yet JBC is a pretty stable representation that is well documented and understood, Bitcode isn't. Also on Android the device itself transforms JBC to CPU code (AOT nowadays).
The general idea is correct, but a rather significant detail is not.
Android did not choose Java bytecode; it chose Dalvik bytecode [1]. This was done for several reasons, but at a high level, Dalvik bytecode is register-based and so more closely matches how real CPUs work, while Java bytecode targets a more abstract stack machine. The result is that Dalvik bytecode is easier to translate into machine code, and relative to Java bytecode, more of the optimization work happens in the source code -> bytecode phase rather than in the bytecode -> machine code phase.
> Programs for Android are commonly written in Java and compiled to bytecode for the Java Virtual Machine, which is then translated to Dalvik bytecode. The successor of Dalvik is Android Runtime (ART), which uses the same bytecode and .dex files (but not .odex files), with the succession aiming at performance improvements.
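The stack-vs-register distinction is easy to see concretely. Python bytecode is not Java bytecode, but both target stack machines, so it makes a convenient illustration (opcode names vary across Python versions):

```python
import dis

def add(a, b):
    return a + b

# A stack machine pushes both operands, then applies the operator:
for ins in dis.get_instructions(add):
    print(ins.opname)

# Typical output includes LOAD_FAST-style pushes followed by a binary-add
# opcode. A register-based design like Dalvik expresses the same thing in
# a single instruction naming its registers: add-int v0, v1, v2
```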
I think it's more a case that browsers take security into account at the feature design phase, whereas other applications don't. That's actually a huge step in the right direction. Same thing with mobile OSes, which made the much better decision to sandbox individual applications instead of running them with full user permissions and full access to user data, as desktop OSes do.
Now, whether browsers or mobile OSes are actually secure as a result is a separate question, but those are good steps to take.
> which made the much better decision to sandbox individual applications instead of running them with full user permissions
It's great that they took security into account during the design phase. I wish they had also taken user empowerment into account. They sandboxed all the apps and, in doing so, made interoperation, plugins, patches, mods, etc. basically impossible. Now the most widely used form of personal computer is more like a portal to digital services than a computing platform. It's sad to see, and I refuse to believe that it's one or the other when it comes to security vs. power.
Absolutely - current Windows is truly horrible: ads, ads, ads, crash :( If I were designing something to be deliberately distracting, annoying, and confusing, it would look a lot like Windows!
Besides Proton/Wine/VMs for gaming, which are all very good, console/computer emulation for gaming is solid on Linux, including literal plug-and-play joystick support.
The REU (RAM Expansion Unit) is a bit of a cheat, and for some releases people are using cartridges with far higher capacities than were ever sold back in the day. Cross-platform tools and emulation are also huge dev speedups.
But a big factor is that back then, games were often made on a timescale of months, and on a budget. Hobbyist devs have no limit on how much time they can sink into doing something properly, and ideas and techniques have had years or decades to percolate into existence.
I use onion addresses to have basically a hash-addressable socket on the internet, which can be relocated without any additional configuration. It can even work without any inbound ports on the firewall.
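For anyone curious, a service like this needs only a couple of torrc lines (the directory and port below are illustrative). Tor writes the generated .onion hostname into the service directory, and since Tor makes only outbound connections, no inbound firewall ports are needed:

```
# torrc (illustrative paths/ports)
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 22 127.0.0.1:22
```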
However, I haven't done any serious roaming with such a configuration; has anybody tried it?
Yes. I can't stand that IPFS basically took the Freenet model, stripped out all the anonymity & privacy, and added the weird "interplanetary" marketing push.
Facebook (and most other businesses in the space) is a system of manipulating and editorializing individuals' social communication in order to extract engagement-related profit.
This is not a common carrier, but an active distorter.
"What if we just stopped distributing and blindly executing untrusted binary blobs?"
A trusted compiler in the OS, and some set of intermediate representations for code distribution would solve a massive amount of security issues, increase compatibility, and allow for future performance increases and disallowing suspect code patterns (spectre, rowhammer, etc). Specializing programs at install time for the local hardware makes way more sense than being locked into hardware machine code compatibility.