It's new homelab time. And with that, potentially new OS time too.
I'm currently very happy with Debian and Docker. The only issue is that I'm brand new to data redundancy. I have a 2-bay NAS I'll use, and I want the two HDDs in RAID 1.
Now, I could definitely just use ZFS or BTRFS with Debian and keep using Docker just like I do now.
Or I could use a dedicated NAS OS. That would help with the RAID part of this, but Docker is a requirement.
Any recommendations?
Debian and the standard Linux mdraid?
Do you mean mdadm? https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm If not, can I have a link?
Yeah, that’s what he means.
I’m doing kinda the same thing with my NAS: md RAID 1 for the SSDs, but only SnapRAID for the big data drives (mostly because I don’t really care if I have to re-download my Linux ISO collection, so SnapRAID plus MergerFS is, like, sufficient for that data).
Also using Ubuntu instead of Debian, but that’s mostly because the box was first built six years ago; I’d 100% go with Debian if I were doing it now.
Yes, as the others pointed out, that’s what I mean: the standard Linux software RAID (also called MD RAID).
It’s proven, battle-tested, pretty robust, and you don’t rely on any vendor-specific formats or any special hardware, for that matter. The main point would be to keep it simple. You could use BTRFS or ZFS or all kinds of things, but that only introduces additional complexity and points of failure, and has no benefit over a plain mirror (which is what RAID 1 is) if we’re talking about just two devices. At least it has served me well in the past, unlike cheap hardware RAID controllers and BTRFS, which each let me down once. A lot of development has gone into BTRFS since then and the situation might have changed, but mdraid is reliable in any case.
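For reference, setting up that plain mirror with mdadm looks roughly like this; /dev/sda1, /dev/sdb1 and the mount point are placeholders for your own partitions and layout:

```
# Create a RAID 1 array from two partitions (device names are placeholders)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Put a filesystem on the array and mount it (mount point is just an example)
mkfs.ext4 /dev/md0
mkdir -p /mnt/storage
mount /dev/md0 /mnt/storage

# Persist the array definition so it assembles at boot (Debian paths)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# Watch the initial sync
cat /proc/mdstat
```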
I’d suggest lvmraid, which is just mdraid wrapped in LVM. It’s a tad simpler to set up and you get the flexibility of LVM, plus the ability to convert from linear to mirror and back as needed. That is, you could do a standard install on LVM, then add another disk to LVM and convert the volumes to RAID 1. It’s all documented under man lvmraid.
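A rough sketch of that conversion, assuming a volume group called vg0, a logical volume called lv_data, and a second disk at /dev/sdb (all of those names are placeholders):

```
# Add the second disk to the existing volume group
pvcreate /dev/sdb
vgextend vg0 /dev/sdb

# Convert an existing linear LV into a two-way mirror (RAID 1)
lvconvert --type raid1 -m 1 vg0/lv_data

# ...and back to linear later if needed
lvconvert -m 0 vg0/lv_data

# Check sync progress and which devices back each LV
lvs -a -o name,copy_percent,devices vg0
```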
Unraid and TrueNAS are pretty popular. OpenMediaVault is less popular, but it's a pretty simple system based on Debian.
I’ve been happy with Unraid; it's super simple to use, and the Community Apps make it easy to find and install Docker containers.
Unraid is great and I have been using it for over a decade now, but a paid OS for a 2-bay NAS seems excessive.
TrueNAS SCALE expects you to deploy Kubernetes workloads; unfortunately, it is not meant for running plain Docker. You can jump through hoops to get it working, but I personally gave up and ended up running a VM on top of TrueNAS just to run Docker in it.
I don’t know about Unraid, though, and OpenMediaVault felt a bit unpolished the last time I used it, so I can’t attest to its ZFS support.
TrueNAS SCALE is switching to Docker Compose. I found this out when the TrueCharts catalog suddenly stopped working. more info
I am currently using OpenMediaVault for my NAS and can confirm that, with the official plugin, I haven't had any issue with my ZFS pool so far (I migrated it from TrueNAS SCALE since I didn't like their Kubernetes use and TrueCharts, though as someone mentioned they seem to be switching to Docker).
Otherwise I am happy as well, but I am far from a power user.
Generally, I think it is better to use a general server OS like Debian or Fedora instead of something specialized like Proxmox or Unraid. That way you can always choose the way you want to use your server instead of being channeled into running it a specific way (especially if you ever change your mind).
I run Debian with ZFS. Really simple to set up, and it has been rock solid too. As far as I can tell, all the issues I’ve had have been my fault.
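In case it helps anyone, creating a two-disk mirror pool on Debian is roughly this (the pool name tank, the dataset, and the by-id paths are placeholders; zfsutils-linux is in the contrib repo):

```
# Install ZFS (Debian ships it in contrib as a DKMS module)
apt install zfsutils-linux

# Create a mirrored pool from the two HDDs; use the stable /dev/disk/by-id names
zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Create a dataset (e.g. for Docker data) and check pool health
zfs create tank/docker
zpool status tank
```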
ZFS looks like it uses a lot of RAM, but you can get away with less if you need to; it's basically extra caching. I was thrilled to use it as an excuse to upgrade my RAM instead.
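If RAM is tight, the ARC (ZFS's cache) can be capped with a module option; a sketch assuming a 4 GiB limit:

```
# Cap the ZFS ARC at 4 GiB (value is in bytes); applies after the module reloads
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u

# Or change it on a running system
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
```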
mdadm has a little more setup than ZFS, as far as I'm concerned. You need to set up your own scrubbing, whereas ZFS schedules its own for you. You need to add monitoring for both, though.
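For what it's worth, an md scrub is just a kernel trigger, so scheduling one yourself is a one-liner; a sketch assuming the array is /dev/md0:

```
# Kick off a consistency check (scrub) on the array
echo check > /sys/block/md0/md/sync_action

# Watch progress
cat /proc/mdstat

# Example /etc/cron.d entry: scrub on the 1st of every month at 01:00
0 1 1 * * root echo check > /sys/block/md0/md/sync_action
```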
I’ve considered looking into the various operating systems designed for this, but they just don’t seem worth the effort of switching to me.
Proxmox with ZFS and plenty of RAM. (ZFS is RAM-heavy.)