I see people with a small 8 GB, 4-core system trying to split it into multiple VMs with something like Proxmox. I think that’s not the best way to utilise the resources.
Since most services sit idle most of the time, whatever is actually running should be able to use the full power of the machine.
My opinion: for a smaller homelab, use Docker and Compose to manage everything directly on the hardware.
Only split off a VM for something critical, and even then decide whether it’s really required.
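Something like this rough sketch is what I have in mind (the services, images and ports are just placeholders for whatever you actually run):

```yaml
# one docker-compose.yml for the whole box, running straight on Debian (or whatever distro)
# example services only, swap in your own stack
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"
    volumes:
      - ./jellyfin/config:/config
      - /mnt/media:/media:ro
    restart: unless-stopped

  pihole:
    image: pihole/pihole:latest
    environment:
      - TZ=Europe/Berlin
    ports:
      - "53:53/udp"
      - "53:53/tcp"
      - "8080:80"
    restart: unless-stopped
```

No resource limits anywhere, so whichever container is actually busy can take the whole machine.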
Do you agree?
Some people play with VMs for fun, or as a learning experience. I don’t think it’s very productive or useful to tell them they’re doing it wrong.
and here i am with a rpi running two VMs 🤭
No, I don’t agree, not necessarily. VMs are “heavier” in that they use more disk and memory, but if they’re mostly idling in a small lab you probably won’t notice the difference. Now, if you’re running 10 services and want to put each in its own VM on a tiny server, then yeah, maybe don’t do that.
In terms of CPU it’s a non-issue: VM or Docker, they’ll still “share” the CPU. I can think of cases where I’d rather run Proxmox and others where I’d just go bare metal and run Docker. Depends on what I’m running and the goal.
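With compose you can even see this directly: by default a container can burst onto every core, and you only cap it if you choose to. Rough sketch, service name and numbers are made up:

```yaml
services:
  some-app:              # placeholder service
    image: nginx:alpine
    restart: unless-stopped
    # no limits set, so this container can use all cores when it is busy
    # cpus: "1.5"        # optional cap, roughly 1.5 cores
    # mem_limit: 512m    # optional cap, 512 MB of RAM
```

Same idea on the VM side: idle vCPUs are mostly idle host threads, so the guest that’s actually busy gets the cycles.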
Proxmox and LXCs vs Docker is just a question of your preferred platform. If you want flexibility and expandability, Proxmox is better; if you just want a set-and-forget device for a specific, static group of services, running Debian with Docker may make more sense to you.
On one hand, I think VMs are overused, introduce undue complexity, and reduce visibility.
On the other hand, the problem you’re citing doesn’t actually exist (at least not on Linux, dunno about Windows). A VM can use all of the host’s memory and processing power if the other VMs on the system aren’t using them. An operating system will balance resource utilization across multiple VMs the same as it does across processes to maximize the performance of each.