So in the last iteration of my home/community server (NAS + some common self-hosted services), I tried to virtualize/dockerize everything, and I pretty much succeeded… except for everything to do with the NAS.

I’ve got a server running Debian, with a couple of HDDs running ZFS. That server is also running a KVM/QEMU hypervisor and Docker, so the vast majority of services live in there, but everything that needs to touch the ZFS pool I had to set up on the server itself, since neither Docker nor the VMs can access (or more precisely, share) the hard drives directly. That is: rotating (GFS) backups, SMB shares, SMART reporting, overall monitoring - it’s actually a nontrivial amount of stuff that I don’t have stored anywhere but in the state of the system itself.

It all works fine, but I don’t like how scattered this is. My ideal is that I have only a small, fixed set of places to worry about, e.g. 1) my VMs, 2) Docker Compose files, 3) Docker volumes, and those three “are” my server. Right now I’ve got those, plus a bunch of hand-written systemd services, some sanoid config, smartd config, smbd config (users/passwords, permissions, etc.)…
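Just to give a flavour of the host-only config I mean, my sanoid retention policy is roughly this (the dataset name here is made up):

```
[tank/data]
    use_template = production
    recursive = yes

[template_production]
    hourly = 24
    daily = 30
    monthly = 6
    autosnap = yes
    autoprune = yes
```

And that’s just one of several files like it, each living only in /etc on the host.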

I don’t think it makes sense to have a VM that actually does the NASing, since then I’d have to… network mount the share on the host from the guest so that Docker can access it? I imagine there’d be some performance loss too.

I dunno, I didn’t come up with any solution that wouldn’t end up being all twisted and circular in the end, but I don’t think what I’ve got is the best possible solution either. Maybe I’m thinking wrong about this, and I should just set up Ansible so that my main server config is reproducible? Or have two physical machines to begin with?
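If I went the Ansible route, I’m imagining something like this rough sketch (the host group, file names and paths are just placeholders), purely so the host-level bits live in a repo instead of only on the machine:

```yaml
# site.yml - rough sketch; host group, file names and paths are placeholders
- hosts: nas
  become: true
  tasks:
    - name: Install the host-level NAS tooling
      ansible.builtin.apt:
        name: [sanoid, samba, smartmontools]
        state: present

    - name: Push the configs that currently live only on the host
      ansible.builtin.copy:
        src: "files/{{ item.src }}"
        dest: "{{ item.dest }}"
      loop:
        - { src: sanoid.conf, dest: /etc/sanoid/sanoid.conf }
        - { src: smb.conf,    dest: /etc/samba/smb.conf }
        - { src: smartd.conf, dest: /etc/smartd.conf }
      notify: restart nas services

  handlers:
    - name: restart nas services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: restarted
      loop: [smbd, smartd]
```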

I’m interested to hear what you think :)


I had a similar thought with my last upgrade. I went with Proxmox HCI and Ceph storage. Somebody else mentioned HCI… it’s been really great for me.

I have six nodes at present. I just bought them one at a time as the budget allowed. I started out with Lenovo Tiny machines and now I use the HP EliteDesk 800 G6 Mini. Same thing with storage. I have two storage pools: one for my SSD/NVMe storage, where my OS VHDs boot from, and one for my slow media storage (HDDs). If I need more RAM or compute I just add another node to the cluster and spread out my VMs. If I need more storage, I add it to one of the nodes in the cluster and Ceph will redistribute the data evenly across the drives while online.
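To give a rough idea of what growing the cluster looks like (the device name is just an example, and the exact steps depend on your setup), adding a node and handing it a disk is roughly:

```
# on the new node, after installing Proxmox VE
pvecm add <ip-of-an-existing-cluster-node>   # join the Proxmox cluster
pveceph install                              # install the Ceph packages
pveceph osd create /dev/sdb                  # hand the new disk to Ceph as an OSD
```

Once the OSD is in, Ceph rebalances data onto it in the background.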

I map my two CephFS filesystems to all of my VMs, so the primary VHD has the OS, Docker, etc… then if I need fast storage it’s already mapped, and so is my slow storage. I map all of my Docker containers’ storage to the fast and slow CephFS. All of the data is stored at the hypervisor level with Proxmox, and all the VMs, containers, and laptops/desktops access the CephFS.
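For the Docker side it’s just bind mounts pointed at wherever the two CephFS filesystems are mounted, something like this (the mount points and the service are only examples):

```yaml
# docker-compose.yml sketch - /mnt/fast and /mnt/slow are example CephFS mount points
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /mnt/fast/jellyfin/config:/config   # fast pool: app state
      - /mnt/slow/media:/media:ro           # slow pool: media library
```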

You can run a Windows VM, mount the two CephFS filesystems, and run your favorite backup software to back all your data up off-site.

Another benefit I was looking for is not having to rebuild this thing every few years when hardware ages out. Because HCI uses nodes… I just add a new node with a newer used eBay computer, and when the oldest one dies I just move the VMs… Ceph takes care of all the data migration.

Proxmox makes Ceph and CephFS extremely easy to deploy. You can expose your file share via iSCSI or CephFS. Windows will need the Ceph Dokan driver installed… or you can use iSCSI instead. That’s the only small inconvenience.

My two cents… hope it helps!

