Hi all, I’ve been venturing into this amazing self-hosted hobby for months, and for the last couple of days I’ve been reading and trying to understand Kubernetes a bit more. I followed this article:
that helps you set up the lightweight Kubernetes distribution (K3s) and use Portainer as your management dashboard, and it works flawlessly. As you guys can see, I’m just using two nodes at the moment.
And I’m using Helm to install packages, with ArtifactHub as the place to find ready-to-use repositories to add into Portainer’s Helm section (still in beta), which works flawlessly. I’ve installed some packages and the apps work just as I expected, but there seems to be a shortage of ready-to-use charts compared with plain Docker. With Plex, for example, the only way I got it running in K3s was with KubeSail, which offers an unofficial apps section that includes Plex and tons of other well-known apps; strangely enough they’re labeled unofficial but still work perfectly when installed, though Portainer labels everything installed from KubeSail as external.
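For reference, the same kind of chart can also be installed without the Portainer UI, since K3s bundles a helm-controller that picks up HelmChart manifests. A minimal sketch, with a placeholder repo URL and chart name rather than anything from the article:

```yaml
# Minimal sketch: installing a Helm chart via K3s's built-in helm-controller.
# The repo URL and chart name are placeholders, not a specific recommendation.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: my-app                 # hypothetical release name
  namespace: kube-system       # where K3s's bundled HelmChart manifests usually live
spec:
  repo: https://example.com/helm-charts   # placeholder chart repository
  chart: my-app                            # placeholder chart name
  targetNamespace: default                 # namespace the release is installed into
  valuesContent: |-
    service:
      type: ClusterIP
```

As far as I can tell, the Portainer Helm section is doing roughly the same helm install under the hood, just with ArtifactHub as the catalogue.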
Now I think I get the point of Kubernetes: it’s to have several nodes to use as resources for your apps, and also a kind of load balancing, so if one node fails your services/apps can keep on running? (Like RAID for hard disks?)
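If I understand it right, that’s basically what a Deployment with more than one replica gives you. A minimal sketch (the name and image are just placeholders), where the scheduler spreads the copies over the nodes and reschedules them elsewhere if a node dies:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web               # placeholder app name
spec:
  replicas: 2                  # two copies, so one node failing doesn't take the app down
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
        - name: web
          image: nginx:1.25    # placeholder image
          ports:
            - containerPort: 80
```

A Service in front of this would then load-balance across whichever replicas are healthy.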
Although it was fun learning at least the basics of Kubernetes with my two nodes, is it really necessary to go all-in on Kubernetes? Or is Docker just fine for the majority of us homelab self-hosted folks?
And is what I’m learning here the same as in enterprise environments? At least the basics?
K8s helps me a lot to see what I don’t know, but nothing more than that. You need tons of studying to understand what is going on beyond the scope of k8s.
Not only is k8s solid overkill for the homelab, but most self-hosted services are not designed to be deployed in k8s pods, so they won’t just work.
If you want to learn something by deploying k8s, it doesn’t help you much either; learning networking is a much better option instead.
I disagree. You can deploy nearly anything from docker hub or some other container registry in k8s with little to no trouble. Can you give some examples?
Applications like gitea, nextcloud, or home assistant won’t just work. Adguard or qbittorrent would just work, but you need to know how k8s works to configure them properly. Certificates via cert-manager also need to be understood, compared with the Docker-world option like npm (Nginx Proxy Manager). Also, you cannot just deploy 2 replicas of vaultwarden, since its default SQLite storage isn’t built for that.
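To give a feel for the “would just work but you need to know k8s” part: even a single-user app like qbittorrent needs a PersistentVolumeClaim, a Deployment, and a Service before it behaves like one docker run with a volume and a port. A rough sketch, with placeholder image tag, storage size, and ports:

```yaml
# Rough sketch of what a "simple" app needs in k8s compared to a single
# `docker run` - image tag, storage size, and ports below are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qbittorrent-config
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qbittorrent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qbittorrent
  template:
    metadata:
      labels:
        app: qbittorrent
    spec:
      containers:
        - name: qbittorrent
          image: linuxserver/qbittorrent:latest   # placeholder tag
          ports:
            - containerPort: 8080                 # web UI
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: qbittorrent-config
---
apiVersion: v1
kind: Service
metadata:
  name: qbittorrent
spec:
  selector:
    app: qbittorrent
  ports:
    - port: 8080
```

And that still leaves out the ingress and certificate side that npm would have handled in one screen.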
I mean, if you have a strong understanding of k8s you can do whatever you want, but many self hosted apps are not designed to be deployed in k8s. I am sure about that.
Based on my experience, I ran into tons of errors and things that just didn’t work, so many times; I made it work eventually, though.
I want to ask you a question: have you deployed anything on k8s? If you have ever deployed self-hosted apps on k8s, I think it is really hard to disagree with my humble opinion.
Gitea, Flux, Pi-hole (HA), Joplin sync, all the Postgres instances to support those, a Synapse server, and vaultwarden. I have a Postgres for each, but use Longhorn so I have 3x replication. If one node dies, Postgres just spins up on another host and grabs the Longhorn volume. Longhorn is running on top of one USB drive per node. All nodes are Raspberry Pis. If I wanted to I could run HA Postgres, but I can live with a few minutes of downtime on anything but DNS, which is HA.
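In case it helps, the 3x replication part is basically just a Longhorn StorageClass plus a claim that the Postgres pod mounts. A rough sketch, with hypothetical names and sizes:

```yaml
# Sketch of a Longhorn StorageClass with 3x replication plus a claim one of
# the Postgres instances can grab; names and storage size are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-3x
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"        # each volume is kept on three nodes
  staleReplicaTimeout: "30"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-postgres-data    # hypothetical name for one Postgres instance
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn-3x
  resources:
    requests:
      storage: 5Gi
```

When the node running that Postgres dies, the pod gets rescheduled and reattaches the same claim, which is why I can live without HA Postgres itself.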