SSI is an interesting idea, but the actual advantage is mostly improved efficiency when running your distributed code on a single node or a few nodes. You still have to write your code with some very real awareness of the relevant issues when running on many nodes, but now you are also free to "scale down" and be highly efficient on a single node, since your code is still "natively" written for running on that kind of system. You are not going to gain much by opportunistically running bad single-node code on larger systems, since that will be quite inefficient anyway.
Also, running a large multi-node SSI system means you mostly can't partition those nodes ever, otherwise the two now-separated sets of nodes could both progress in ways that cannot be cleanly reconciled later. This is not what people expect most of the time when connecting multiple computers together.
You could say the same thing about multiple cores or CPUs. A lot of people write apps that aren't useful past a single core or CPU. Doesn't mean we don't build OSes & hardware for multiple cores... (Remember back when nobody had an SMP kernel, because, hey, who the hell's writing their apps for more than one CPU?! Our desktops aren't big iron!)
In the worst case, your code is just running on the CPU you already have. If you have another node/CPU, you can schedule your whole process on that one, which frees up your current CPU for more work. If you design your app to scale across more nodes/CPUs, you get more benefit. So even in the worst case, everything would just be... exactly the way it is today. But there are many cases that would benefit, and once the platform is there, more people would take advantage of it.
There is still a massive opportunity in general parallel computing that we haven't explored. Plenty of research, but aimed at specific kinds of use cases, and with not nearly enough investment, so the little work that got done took decades. I think we could solve all the problems and make it generally useful, which could open up a whole new avenue of computing / applications, the way more bandwidth did.
(I'm referring to consumer use-cases above, but in the server world alone, a distributed OS with simple parallel computing would transform billion-dollar markets in software, making a whole lot of complicated solutions obsolete. It might take a miracle for the code to get adopted upstream by the Linux Mafia, though)
> It might take a miracle for the code to get adopted upstream by the Linux Mafia, though
The basic building block is containerization/namespacing, which has been adopted upstream. If your app is properly containerized, you can use the CRIU (Checkpoint/Restore In Userspace) featureset, which is also upstream, to checkpoint it and migrate it to another node.
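For anyone curious what that looks like in practice, here's a rough sketch of the checkpoint/copy/restore flow, written as Python driving the criu CLI. The PID, image directory, and destination host are made up for illustration, and real migrations are usually orchestrated by the container runtime rather than by hand:

    # Sketch of checkpoint/migrate/restore with CRIU. Assumes criu is
    # installed on both hosts and the process tree is self-contained
    # (no TTYs or live TCP connections to worry about).
    import subprocess

    PID = 12345                # hypothetical PID of the containerized app
    IMG_DIR = "/tmp/ckpt"      # where criu writes its image files
    DEST = "node2:/tmp/ckpt"   # hypothetical destination node

    # 1. Checkpoint the process tree into image files (this stops the process).
    subprocess.run(["criu", "dump", "-t", str(PID), "-D", IMG_DIR, "--shell-job"],
                   check=True)

    # 2. Copy the images to the other node.
    subprocess.run(["rsync", "-a", IMG_DIR + "/", DEST], check=True)

    # 3. On the destination node, resume the process from the images:
    #      criu restore -D /tmp/ckpt --shell-job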