As I’m about to grow beyond my current four servers, how should I handle storing the OS for each one?

Problem: I’m on a very tight budget, and I’m looking for a way to add more servers without buying and maintaining a new boot SSD for each one.

The current setup is that each server has its own 120/240 GB SSD to boot from, and one of my servers is a NAS.

At first I thought of persistent PXE boot, but one of the problems is how I would assign each machine its own image…
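For what it’s worth, per-machine image assignment is usually solved at the DHCP server: match each client’s MAC address and hand it its own boot file or script. A minimal sketch with dnsmasq (the MACs, tags, and filenames below are all made up):

```
# /etc/dnsmasq.d/pxe.conf -- hypothetical example
# Tag each known server by its NIC's MAC address
dhcp-host=aa:bb:cc:dd:ee:01,set:node1
dhcp-host=aa:bb:cc:dd:ee:02,set:node2

# Hand each tag its own boot file from the TFTP root
dhcp-boot=tag:node1,node1/pxelinux.0
dhcp-boot=tag:node2,node2/pxelinux.0

enable-tftp
tftp-root=/srv/tftp
```

With iPXE in the mix you can also skip the per-MAC DHCP rules and serve one script that branches on the `${net0/mac}` variable instead.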

I’ve found a post about disk-less persistent PXE, but it’s 5 years old and it talks about SAN booting, and most people I’ve seen in this sub are against the Fibre Channel protocol, so maybe there’s a better way?

Setting aside speed requirements (like an all-flash NAS or 10+ Gbit networking), is it possible to add more servers without purchasing a dedicated boot device for each one?

1 point

To avoid a single point of failure for each new server, I would add a $15 Inland SSD per server, even on a zero-dollar budget.

1 point

I use NFS roots for my hypervisors, and iSCSI for the VM storage. I previously didn’t have iSCSI in the mix and was just using qcow2 files on the NFS share, but that had some major performance problems when there was a lot of concurrent access to the share.
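As a rough illustration of that split (the paths, subnet, and IQN below are invented), the storage-server side can be as little as one NFS export for the roots plus an iSCSI block target for the VM disks:

```
# /etc/exports on the storage server -- hypothetical paths/subnet
# no_root_squash lets the hypervisors write to their roots as root
/srv/nfsroot  192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)

# targetcli (LIO) commands to expose an LVM volume over iSCSI for
# VM disks, avoiding the qcow2-on-NFS contention under concurrent
# access (LUN/ACL mapping and initiator setup omitted for brevity)
targetcli /backstores/block create vmstore /dev/vg0/vmstore
targetcli /iscsi create iqn.2024-01.lan.nas:vmstore
```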

The hypervisors use iPXE to boot (mostly; one of them has gPXE on the NIC, so I didn’t need to have it boot to iPXE before the NFS boot).

In the past I also used a purely iSCSI environment, with the hypervisors using iPXE to boot from iSCSI. I moved away from it because it’s easier to maintain a single NFS root for all the hypervisors for updates and the like.
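For reference, the pure-iSCSI variant usually comes down to a tiny per-host iPXE script; something like this (server address and target name are invented):

```
#!ipxe
# Hypothetical boot-from-iSCSI script: get an address via DHCP, then
# hand the whole boot process over to the SAN target (LUN 0)
dhcp
sanboot iscsi:192.168.1.10::::iqn.2024-01.lan.nas:boot-node1
```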

1 point

How? Are you loading a configuration from a device plugged into each hypervisor server? Any projects I should read further into?

1 point

The servers use their built-in NIC’s PXE to load iPXE (I still haven’t figured out how to flash iPXE to a NIC), and then iPXE loads a boot script that boots from NFS.
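The chainload step described here (built-in PXE loads iPXE, which then fetches the real script) is typically done with a DHCP rule that detects whether the client is already running iPXE, so it doesn’t chainload itself forever. A hypothetical dnsmasq version:

```
# iPXE identifies itself with DHCP option 175
dhcp-match=set:ipxe,175
# Plain PXE firmware: chainload the iPXE binary over TFTP
dhcp-boot=tag:!ipxe,undionly.kpxe
# Already running iPXE: fetch the real boot script over HTTP
dhcp-boot=tag:ipxe,http://192.168.1.10/boot.ipxe
```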

Here is the most up-to-date version of the guide I used to learn how to NFS boot: https://www.server-world.info/en/note?os=CentOS_Stream_9&p=pxe&f=5 - this guide is for CentOS, so you will probably need to do a little more digging to find one for Debian (which is what Proxmox is built on).
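The boot script itself then just loads a kernel with an NFS root on the command line. Something along these lines (the addresses and paths are invented, and the exact parameters differ between the CentOS guide above and a Debian/Proxmox setup):

```
#!ipxe
# Hypothetical boot.ipxe: fetch kernel + initrd over HTTP,
# mount the root filesystem over NFS
kernel http://192.168.1.10/vmlinuz root=/dev/nfs nfsroot=192.168.1.10:/srv/nfsroot ip=dhcp rw
initrd http://192.168.1.10/initrd.img
boot
```

Note the initrd has to be built with NFS support (e.g. dracut’s nfs module or initramfs-tools with `BOOT=nfs`) for an NFS root to work at all.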

iPXE is the other component: https://ipxe.org/

It’s worth pointing out that this was a steep learning curve for me, but I found it super worth it in the end. I have a pair of redundant identical servers that act as the “core” of my homelab, and everything else stores its shit on them.

1 point

I am an advocate of the FC protocol. I love it, much more than iSCSI. But I hate SAN booting; it is a pain in the ass. You need a server to host the images, and you have to build up a SAN infrastructure. I guess 2 boot SSDs are cheaper. 2x 64 GB NVMe SSDs with a PCIe card, or 2x 64 GB SATA SSDs, cost next to nothing.

1 point

With the price of SSDs what it is now for a small ~100 GB drive, why bother with the additional setup and potential failure points?

I’ve run ESXi over the network, and even that wasn’t fun, with longish boot times. I certainly wouldn’t like to run Proxmox that way. These days there’s really no reason not to have “some” fast direct storage in each server, even if it’s mainly used as cache.

What you’re looking for is possible, but to me the saving of roughly $20 per machine just isn’t worth introducing more headaches.

1 point

You will need to PXE boot into a RAM disk and then use iSCSI/NFS/CEPH/etc for persistent storage.
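Concretely, on a Debian-family distro that pattern can look like live-boot pulling a squashfs image into RAM over HTTP, with persistent data mounted afterwards over the network (everything below is illustrative):

```
#!ipxe
# Hypothetical stateless boot: the squashfs is copied into RAM,
# so the server runs disk-less after boot
kernel http://192.168.1.10/vmlinuz boot=live fetch=http://192.168.1.10/filesystem.squashfs ip=dhcp
initrd http://192.168.1.10/initrd.img
boot
```

Persistent state (VM images, configs) then lives on an NFS, iSCSI, or Ceph mount, e.g. an fstab entry along the lines of `192.168.1.10:/srv/persist /var/lib/vz nfs defaults 0 0`.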


Homelab

!homelab@selfhosted.forum
