I’m thinking about starting a self-hosting setup, and my first thought was to install k8s (probably k3s) and containerise everything.
But I see most people on here seem to recommend virtualizing everything with proxmox.
What are the benefits of using VMs/proxmox over containers/k8s?
Or really I’m more interested in the reverse: are there reasons not to just run everything with k8s as the base layer? Since it’s more relevant to my actual job, I’d lean towards ramping up on k8s unless there’s a compelling reason not to.
Unless you have multiple systems, I don’t think k8s will yield much benefit over plain docker.
Containers, unless you have a specific need for a VM.
With a VM you have to reserve resources exclusively: if you give a VM 2 GB of RAM, that’s 2 GB of RAM you can’t use for other things, even if the guest OS is using less.
With containers, you only need as many resources as the processes inside them require at the time.
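To make the contrast concrete, a rough sketch (container name and VM ID are made up here): Docker’s memory flag sets a ceiling rather than a reservation, while a Proxmox VM’s allocation is handed to the guest up front (though ballooning can reclaim some of it).

```shell
# Docker: --memory is a hard *cap*, not a reservation.
# The container only consumes what its process actually uses.
docker run -d --name web --memory=2g nginx

# Proxmox: the VM's allocation is given to the guest OS whether
# or not it's in use (memory ballooning can claw some back).
qm set 100 --memory 2048
```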
Why not do both? I run proxmox on my physical hardware, then have guest VMs within proxmox that run k8s.
Advantages of proxmox:
- Proxmox makes it easy to spin up VMs for non self host purposes (say I want to play with NixOS)
- Proxmox snapshots make migrations and configuration changes a bit safer (I recently messed up a Postgres 15 migration and was able to roll back with a button press)
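That snapshot/rollback flow maps to a couple of Proxmox CLI commands (the VM ID and snapshot name below are hypothetical); the same actions are a click away in the web UI:

```shell
# Take a snapshot before a risky change (e.g. a Postgres major upgrade)
qm snapshot 101 pre-pg15-upgrade

# If the migration goes sideways, roll the whole VM back to it
qm rollback 101 pre-pg15-upgrade
```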
You could then just run Docker containers inside a Proxmox VM, but I like k8s (specifically k3s) because:
Advantages of k8s:
- cert-manager means your HTTP services automatically get TLS certs essentially for free (once you’ve set up cert-manager for the first time, anyway)
- I find k8s’ YAML-based configuration easier to track and manage. I can spin my containers up fresh just from my config, without worrying about stray environment settings I might not have backed up.
- k8s makes it easy for me to reason about which services are exposed internally to each other, and which are exposed on the host outside of my k8s cluster.
- k8s services get persistent DNS and IPs within the cluster, so configuring nodes to talk to each other is very easy.
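As a rough illustration of those last points (all names and hosts here are hypothetical), a ClusterIP Service is only reachable inside the cluster at a stable DNS name, while an Ingress with a cert-manager annotation exposes it externally with a TLS cert:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp          # reachable in-cluster as myapp.default.svc.cluster.local
spec:
  type: ClusterIP      # internal-only; NodePort/LoadBalancer would expose it on the host
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # cert-manager issues the TLS cert
spec:
  tls:
    - hosts: [myapp.example.com]
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```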
And yeah, this way I get to learn two technologies rather than one 😁
What I did was install Proxmox on the bare metal, then set up a VM in which I put the containers.
Proxmox itself stays (almost) completely stock. The only change I’ve made to it was to add the NUT client package so it can gracefully shut down if my NUT server indicates that the UPS is running out of power during an outage.
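For reference, the client side of that is basically one MONITOR line in upsmon.conf pointing at the NUT server (hostname, credentials and UPS name below are placeholders):

```
# /etc/nut/upsmon.conf on the Proxmox host (NUT client mode)
# MONITOR <ups>@<nut-server> <powervalue> <user> <password> <type>
MONITOR myups@192.168.1.50 1 upsmon secretpass slave
SHUTDOWNCMD "/sbin/shutdown -h +0"
```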
In your VMs you can do whatever: set up OMV, or a stock Ubuntu or Debian VM and install your services directly or via Docker/Podman, or set up Fedora CoreOS or IoT VMs and host all your services in Podman containers.
The great thing about Proxmox is you can do snapshot backups, which take mere moments to complete, then pass those off to a NAS where they can survive an irreparable loss of your Proxmox server.
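A sketch of that backup flow with vzdump (VM ID and storage name are placeholders, and the NAS would first be added as a storage target in Proxmox):

```shell
# Snapshot-mode backup: the VM keeps running while it's dumped
vzdump 100 --mode snapshot --storage nas-backups --compress zstd

# Or schedule it for all VMs via Datacenter -> Backup in the web UI
```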
You can also spin up new VMs as needed just to fuck around with new tech or a new way of setting up your home lab. It gives you a ton of flexibility and makes backing stuff up way easier.
Another great thing: if three years down the line you’re looking to replace your server hardware with something newer or more powerful, you can just add the new device as a node to the cluster, migrate all your existing VMs over to the new hardware, and decommission the old one with little to no downtime on anything.
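Roughly, that hardware-refresh flow looks like this (node name, IP and VM ID are hypothetical):

```shell
# On the new machine, after installing Proxmox: join the existing cluster
pvecm add 192.168.1.10        # IP of an existing cluster node

# Then, from the old node, live-migrate each VM across
qm migrate 100 new-node --online
```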
This is my exact setup as well. Proxmox with one beefy VM dedicated just to Docker, and then a few other VMs for non-Docker workloads (e.g. Home Assistant, Pi-hole, Jellyfin). I could probably run those in Docker as well, but they worked better as VMs when I set them up.
Appreciate your take on this and specifically mentioning that you have a VM for Home Assistant. That was a lightbulb moment for me as I like how easy it is to manage updates as an OS install rather than in a Docker container. If I ever get around to rebuilding my server architecture I’m definitely going to do this!
I’d suggest looking into k8s. It’s definitely a bit more complex at the start, but there’s so much more power once you get into the details. With VMs you don’t share the base OS layer or the hardware, and you have to pre-define the resources each app needs in a more constrained manner, while containers can move freely within their little sandbox and pick up whatever they need.
It is also much easier to manage replicas, upgrades, scaling and a bunch of other things once you are using containers and an orchestrator like Kubernetes. Let me know if you need any help/insights. I’ve been trying to post more videos/answers about things that could be complicated.
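For example, replica management and rolling upgrades are one-liners in kubectl (deployment and image names here are made up):

```shell
kubectl scale deployment myapp --replicas=3        # add replicas
kubectl set image deployment/myapp myapp=myapp:v2  # rolling upgrade
kubectl rollout status deployment/myapp            # watch it roll out
kubectl rollout undo deployment/myapp              # roll back if it breaks
```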
If everything you want to run makes sense to do within k8s it is perfectly reasonable to run k8s on some bare-metal OS. Some things lend themselves to certain ways of running them better than others. E.g. Home Assistant really does not like to run anywhere but a dedicated machine/VM (at least last time I looked into it).
Regardless of k8s, it may make sense to run some sort of virtualization layer just to make management easier. One panel you can use to access all of the machines in your k8s cluster at the console level can be pretty nice, and a Proxmox cluster would give you this. You can make a VM on a host that takes up basically all of its available RAM/CPU. Proxmox specifically has some built-in niceties with Gluster (which I’ve never used; I manage Gluster myself on bare metal) which could even be useful inside a k8s cluster for PVCs and the like.
If you’re willing to get weird (and experimental), look into Rancher’s Harvester: it’s an HCI platform (similar to Proxmox or vSphere) that uses k8s as its base layer and even manages VMs through k8s APIs. I played with it a bit and it was really neat, but opted for bare-metal Ubuntu for my lab install (and actually moved from RKE2 to k3s to Nomad to Docker Compose with some custom management/clustering over the course of a few years).
I think it depends on your scale. For homelab stuff, Docker is awesome IMO.
Why not use both? I have PVE installed on all of my hosts and then use k3s/docker in VMs. If there’s ever anything you don’t want to or just can’t deploy as a container (e.g. OPNsense, hassio, TrueNAS, Windows [for whatever reason you might have]), you can just spin it up as a VM and not worry about adding and maintaining another physical machine.
If you are using PVE for Linux “VMs”, those probably aren’t actually VMs but LXC containers. And if you’re running Docker in one of those, you’ve got containers in your containers.
Welcome to the club.
Yo dawg. I heard you like containers.