It is because at the time I was doing a lot of Python development, and I was (and still am) using my server as a dev workstation.
Isolation with virtualenv was not great, and many projects needed conflicting versions of system packages, or newer versions than what Debian stable had.
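To illustrate the gap being described: a virtualenv isolates Python packages per project, but every venv still shares the system interpreter and system libraries underneath, which is exactly where Debian stable's versions leak in. A minimal sketch (paths and package names are placeholders, not the commenter's setup):

```shell
# Two projects, two venvs: Python-level dependencies no longer collide.
# --without-pip keeps the sketch fast and offline.
set -e
python3 -m venv --without-pip /tmp/proj-a-venv
python3 -m venv --without-pip /tmp/proj-b-venv
# In real use each venv would pin its own versions, e.g.:
#   /tmp/proj-a-venv/bin/pip install 'somelib==1.0'
#   /tmp/proj-b-venv/bin/pip install 'somelib==2.0'

# But the isolation stops at the Python layer: sys.base_prefix shows the
# venv still rides on the system Python and its shared libraries.
/tmp/proj-a-venv/bin/python -c 'import sys; print(sys.prefix)'       # the venv
/tmp/proj-a-venv/bin/python -c 'import sys; print(sys.base_prefix)'  # the system install
```

That system-level sharing is what containers remove, at the cost of the overhead discussed below.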
A lot of the issue was me messing around \o/
"just having the Kubernetes server components running add a 10% CPU on my Intel(R) Atom(TM) CPU C2338 @ 1.74GHz."
Containerization is not a win here. Where's the second machine to fail over to?
I think it is worth it in order to get a centralized control plane for everything, plus automatic build and deployment for everything.
But I agree with you, some apps (postfix, dovecot) don't feel great inside a container (sharing data across UIDs is meh, and postfix's multi-process design also...)
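For context on the UID pain mentioned here: files written through a bind mount keep the container process's UID on the host, so data shared with the host ends up with mismatched ownership. One common mitigation is to run the container as the host user that owns the data. A sketch, assuming Docker is available; image name and paths are placeholders, not the commenter's actual setup:

```shell
# Hypothetical mitigation for bind-mount ownership mismatches: start the
# container process with the host UID/GID that owns the shared directory,
# so anything it writes to /srv/mail stays owned by that user on the host.
docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v /srv/mail:/var/mail \
  alpine:3.19 sh -c 'id -u && touch /var/mail/probe'
```

This trick doesn't fit postfix well, though: postfix expects to start as root and drop privileges across its own set of processes, which is part of why it feels awkward containerized.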
I just wanted to have everything managed in containers, and since those were the last ones left, I moved them in too.
> I was (and still) using my server as a dev workstation
This seems like a very bad idea, and I'm not at all surprised you had problems. But it doesn't look like the problems were with the server part; if your machine had only been a server you could have avoided all the stuff about needing to pull from unstable. So I don't think "don't put all the server stuff on one machine" is the real takeaway from your experience; I think the real takeaway is "don't use the same machine as both a server and a dev workstation".
Well, at that point you just move the problem from "how to manage the home server" to "how to manage the dev workstation". You need somewhere where you can install not just random Python packages but also random databases, task queues etc. during development. I guess "accept that your dev box will always be flaky and poorly understood, you'll have to spend time productionising anything before you can deploy it anywhere else, and if you replace it you'll never get things set up quite the same" is one possible answer (and perhaps the most realistic), but it's worth looking for a better way.
> at that point you just move the problem from "how to manage the home server" to "how to manage the dev workstation"
No, you separate it into two problems that are no longer coupled to each other. The requirements for a server are very different from those for a dev workstation, so trying to do both on the same machine is just asking for trouble.
> You need somewhere where you can install not just random Python packages but also random databases, task queues etc. during development.
Yes, that's what a dev workstation is for. But trying to do that on the same machine where you also have a server, which doesn't want all that stuff, is not, IMO, a good idea.
> I guess "accept that your dev box will always be flaky and poorly understood
It will be as flaky and poorly understood as the code you are developing and whatever it depends on, yes. :-)
But again, you don't want any of that on a machine that's a server. That's why it's better to have a server on a different machine.
The biggest objection in this thread is to the 10% overhead of containers, so it seems strange to see the 100% overhead of two separate computers as a better solution.
And at some point the code has to go from dev code to production code. If you're managing dev and production in different ways, then you're going to have to spend significant time "productionising" your dev code (listing dependencies in the right formats etc.). And the bigger the gap between the machine you develop on and the machine you deploy to, the higher the risk of production-only bugs. So keeping your dev workstation as similar as possible to a production server - and installing dependencies etc. in a way that's compatible with production from day 1 - makes a lot of sense to me.
We seem to be talking about different kinds of servers. You say:
> at some point the code has to go from dev code to production code. If you're managing dev and production in different ways, then you're going to have to spend significant time "productionising" your dev code
This is true, but as I understand the article we are talking about, it wasn't talking about a dev workstation and a production server for the same project or application. I can see how it could make sense to have those running on the same machine (but probably in containers).
However, the article was talking about a dev workstation and a home server which had nothing to do with developing code, but was for things like the author's personal email and web server. Trying to run those on the same machine was what caused the problems.
I presume what the author is developing is code that they're eventually going to want to run on their home server, at least if they get far enough along with it. What else would the end goal of a personal project be?
Reading this chain, you seem to want it both ways: a dev machine that runs an unstable config and is in an unknown state due to random package installation, but that is also stable and reproducible.
Yes, that's exactly why the OP's approach is appealing! I want it to take minimum effort to install some random new package/config/dependency, but I also want my machine to be stable and reproducible.