Hi all, I’ve been venturing into this amazing self-hosted hobby for months, and for the last couple of days I’ve been reading up and trying to understand Kubernetes a bit more. I followed this article, which walks you through setting up the lightweight Kubernetes distribution (K3s) with Portainer as the management dashboard, and it works flawlessly; as you guys can see, I’m just using two nodes at the moment.
I’m using Helm to install packages and ArtifactHub to find ready-to-use repositories to add into Portainer’s Helm section (still in beta, but it works flawlessly). I’ve installed a few packages and the apps work just as I expected. There does seem to be a shortage of ready-to-use repositories compared with plain Docker, though. For example, the only way I got Plex running on K3s is through KubeSail, which offers an unofficial apps section that includes Plex and tons of other well-known apps; strangely enough, even though they’re labeled unofficial they work perfectly once installed, although Portainer marks every app installed from KubeSail as “external”.
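For anyone curious, the CLI equivalent of what the Portainer Helm section is doing is roughly this (bitnami/postgresql is just an example of a repo and chart you’d find on ArtifactHub):

```bash
# Register a chart repository found on ArtifactHub, then refresh the index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a chart from that repo as a named release
helm install my-postgres bitnami/postgresql
```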
Now I think I get the point of Kubernetes: you pool several nodes as resources for your apps, and it also acts a bit like a load balancer, so if one node fails your services/apps keep on running? (Like RAID for hard disks?)
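At least that’s how it behaves when I poke at it; with a test deployment (hypothetical name here) you can watch the replicas spread across the nodes:

```bash
# Scale a test deployment to 2 replicas; the NODE column shows the copies
# running on different nodes. If a node dies, its pod gets rescheduled.
kubectl scale deployment whoami --replicas=2
kubectl get pods -o wide
```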
Although it was fun learning at least the basics of Kubernetes with my two nodes, is it really necessary to go full-blown Kubernetes-only? Or is Docker just fine for the majority of us homelab self-hosting folks?
And is what I’m learning here the same as in enterprise environments? At least the basics?
K8s can allow you to build a reliable and mostly self-sufficient suite of tools for your homelab. There is a lot of upfront cost to get there. However, I’d argue k8s isn’t actually all that much more complex than running individual Docker containers: in both cases you need an understanding of networking, containers, proxies, databases, and declarative config of some form or another. K8s just provides primitives that make it really easy to build up more complex container projects declaratively. That doesn’t mean it has to be complex.

I run 5 or 6 different services with individual backing Postgres DBs. I source the containers from Docker Hub just like you would with Docker. Certbot will auto-deploy certs for any service I set up this way, and HAProxy will auto-add domains and upstreams for them too. When I want to set up a new service, I often just copy and paste an existing service manifest and do a find-and-replace with the new service name. At that point I can usually just apply the manifest and wait 5 minutes; my service will be up, available on the internet, and already have SSL certs.
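To give a feel for it, here’s a trimmed sketch of what one of those copy/paste manifests boils down to (every name, image, and port here is a placeholder, and the cert/proxy automation is set up separately):

```bash
# Find-and-replace "myapp" with the new service name, then apply.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myorg/myapp:latest   # pulled from Docker Hub, same as plain docker
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
EOF
```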
I’ll add: if you have a really complex project with tons of microservices, you can deploy a Helm chart for it in two commands, even with minimal or no knowledge of how it should be set up.
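For example, kube-prometheus-stack pulls in Prometheus, Grafana, Alertmanager, exporters, and all the wiring between them, and the whole thing really is two commands:

```bash
# A full monitoring stack, dozens of components, two commands
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install monitoring prometheus-community/kube-prometheus-stack
```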
I had 3 R640s at home churning away on Kubernetes for about a year. The main benefit for me was that it was an enjoyable and fun learning experience. Ultimately I got no real availability benefit, thanks to city power issues and not wanting to shell out for a UPS large enough to prevent a Kubernetes cluster disaster. Between the energy use and the heat, I scaled back to running a bunch of Docker workloads on a NUC Extreme and one of my R640s. Of course, you could also run k3s on a string of Raspberry Pis or something too.
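Getting k3s onto a pile of Pis is about this much work; these are the documented quick-start one-liners, with placeholders for your server IP and token:

```bash
# On the first Pi (the server):
curl -sfL https://get.k3s.io | sh -
# Print the join token the agents will need:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional Pi (agent), point it at the server:
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```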
If your homelab is for learning, and you want to learn it, why not?
Can someone link that video of the Adolf Hitler rant about containers running in containers running in a "lightweight" VM?
Here you go, https://youtu.be/9wvEwPLcLcA?si=loZgvThxgryDIoYy
Edit: thanks for the reminder that this exists haha
K8s is not worth it for the average homelab user. But the whole point of self-hosting is to do way too complicated stuff for fun, so…