I really want to run Ceph because it fits a number of criteria I have: gradually adding storage, mismatched disks, fault tolerance, erasure coding, encryption, and out-of-the-box support from other software (like Incus).

But then I look at the hardware suggestions, and they seem like an up-front investment and an ongoing cost to keep at least three machines evenly matched on RAM and physical storage. I also want something more like a single-box NAS.

Would it be idiotic to put a Ceph setup all on one machine? I could run three mons on it, each backed by a separate physical device, so a single disk failure doesn’t take them all out. I’m not too concerned about speed or network partitioning; this would be lukewarm storage for me.
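If I did go this route, my understanding is that the main knob is telling CRUSH to use individual OSDs (disks) rather than hosts as the failure domain, so placement spreads across disks instead of machines. A rough sketch, assuming a recent Ceph release; the profile and pool names are mine:

```
# Erasure-code profile that spreads chunks across OSDs, not hosts.
# k=4 data + m=2 coding chunks is just an example layout.
ceph osd erasure-code-profile set single-node-ec \
    k=4 m=2 crush-failure-domain=osd

# Erasure-coded pool using that profile.
ceph osd pool create warmstore erasure single-node-ec

# For replicated pools, a CRUSH rule with an osd-level failure domain.
ceph osd crush rule create-replicated replicated-osd default osd
```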

11 points

Why not just use ZFS or BTRFS then? Way less overhead.

Ceph’s main advantage is distributing storage over multiple nodes, which you’re not planning to do?

5 points

I mean, yeah, I’d prefer ZFS, but unless I’m missing something it’s a massive pain to add disks to an existing pool. You have to buy a whole new set of disks and create a new pool to transition from RAIDz1 to RAIDz2. That’s basically the only reason it fails my criteria. I think I’d also prefer erasure coding over RAIDz2, but it seems like regular scrub operations could keep it reliable.

BTRFS sounds like it has too many footguns for me, and its raid5/6 equivalents are “not for production at this time.”

2 points

LVM, mdraid, dm-crypt? LVM will let you make volumes and pools of basically any shape or size.
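A minimal sketch of that stack, assuming four disks and placeholder device/volume names:

```
# RAID6 across four disks (usable capacity follows the smallest member).
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Encrypt the array with LUKS, then open it.
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptpool

# LVM on top: carve out volumes of any shape/size; grow later with lvextend.
pvcreate /dev/mapper/cryptpool
vgcreate vg0 /dev/mapper/cryptpool
lvcreate -L 500G -n nas vg0
```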

-1 points

Adding new disks to an existing ZFS pool is as easy as deciding what redundancy scheme you want for them, then adding them to the pool with that scheme. E.g. you have an existing pool with a RAIDz1 vdev of three 4TB disks. You find some cheap recertified disks and want to expand with more redundancy to mitigate the risk. You buy four 16TB disks, create a RAIDz2 vdev from them, and add that to the existing pool. The pool grows by whatever space the new vdev provides.

Critically, pools are JBODs of vdevs. You can add any number or type of vdevs to a pool, and redundancy is handled at the vdev level, so one pool can mix any RAIDzN and/or mirrors. You don’t create a new pool and transition to it: you add another vdev with whatever redundancy topology you want to the existing pool and keep writing data to it. You don’t even have to take it offline. If you add a second RAIDz1 vdev to an existing RAIDz1, you get similar (though not identical) redundancy to RAIDz2: each vdev can lose one disk, but neither can lose two.
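In commands, the whole expansion is one line plus a status check (pool and device names here are examples):

```
# Existing pool "tank" already has a 3-disk RAIDz1 vdev;
# add a second vdev: RAIDz2 across the four new 16TB disks.
zpool add tank raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh

# New writes now stripe across both vdevs.
zpool status tank
```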

Finally, if you have even stranger hardware lying around, you can combine it into appropriately sized volumes via LVM and hand those to ZFS, as someone already suggested. I used to have a mirror with one real 8TB disk and one 8TB LVM volume made of a 1TB, a 3TB, and a 4TB disk. Worked like a charm.
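Roughly what that looked like; device and volume names are placeholders:

```
# One ~8TB logical volume spanning the 1TB + 3TB + 4TB disks.
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate vgbig /dev/sdb /dev/sdc /dev/sdd
lvcreate -l 100%FREE -n disk8t vgbig

# Mirror the LVM volume against the real 8TB disk in ZFS.
zpool create tank mirror /dev/sda /dev/vgbig/disk8t
```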

3 points

You end up wasting a ton of space that way, though, because each vdev has its own parity drives.

3 points

“As easy as buying four same-sized disks all at once” is kinda missing the point.

How do I migrate data from the existing z1 to the z2? And then how can I re-add the disks that were in the z1 after I’ve moved the data? Buy yet another disk and add a z2 vdev with my now four disks, I guess. Unless it’s possible to format them and add them to the new z2?

If the vdevs aren’t all at the same redundancy level, am I right that there’s no guarantee which level of redundancy any particular file is getting?

7 points

Found an interesting read on the matter here:
https://old.reddit.com/r/ceph/comments/mppwas/single_node_ceph_vs_zfsbtrfs/
Most seem to recommend going with ZFS on a single machine, but one person discusses their first-hand experience with single-node Ceph.

3 points

This was really neat; it kinda boils down to “you don’t want to deal with the complexity and it’s horrifically slow.”

2 points

Neat! Thank you

3 points

Create 3 VMs and pass disks through to each VM. Boom: Ceph cluster on a single computer.

ZFS/BTRFS might still be better, but if you really want Ceph this should work, providing expansion and redundancy at the block-device level, though you won’t have any hardware redundancy for power or nodes.
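Something along these lines with Incus (which you already mentioned) should get you there; the image alias and disk paths are illustrative:

```
# Three VMs, each with one physical disk passed through for its OSD.
for i in 1 2 3; do
    incus launch images:debian/12 ceph$i --vm
done
incus config device add ceph1 osd disk source=/dev/disk/by-id/ata-DISK1
incus config device add ceph2 osd disk source=/dev/disk/by-id/ata-DISK2
incus config device add ceph3 osd disk source=/dev/disk/by-id/ata-DISK3
```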

3 points

Since you’re talking mismatched disks: I moved to Unraid after running a Ceph cluster. I found it easy to keep adding and upgrading disks in Unraid, which made more sense for me than maintaining or adding nodes, though I do like the concept of adding nodes for very large storage arrays. My current Unraid server is 180TB.

It is super simple to add/upgrade the storage one disk at a time.

0 points

Oh, neat, I’ll have to look into that more. Is it able to have some redundancy, and does it do some sort of rebalancing on disk failures?

1 point

It has parity disks, which always need to be the largest disks in your array. You can run with either a single or a double parity disk.

It seems to work well; that’s how I’ve replaced a dozen disks in the last year, upgrading from 8TB disks to 18TB or 22TB ones.

2 points

Ceph is a huge amount of overhead, both in engineering and in compute resources, for this use case.

