I am planning to build a multipurpose home server. It will be a NAS, a virtualization host, and run the typical self-hosted services. I want all of these services to have high uptime and be protected from power surges/blackouts, so I will put my server on a UPS.

I also want to run an LLM server on this machine, so I plan to add one or more GPUs and pass them through to a VM. I do not care about high uptime on the LLM server. However, this of course means that I will need a more powerful UPS, which I do not have the space for.

My plan is to get a second power supply to power only the GPUs. I do not want to put this PSU on the UPS. I will turn on the second PSU via an Add2PSU.

In the event of a blackout, this means that the base system will get full power and the GPUs will get power via the PCIe slot, but they will lose the power from the dedicated power plug.

Obviously this will slow down or kill the LLM server, but will this have an effect on the rest of the system?
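
For a sense of scale on the UPS problem, here is a back-of-the-envelope sketch in Python. Every wattage and the power factor are illustrative assumptions, not measurements of my actual hardware:

```python
# Rough UPS sizing sketch -- all wattages are assumptions, not measurements.
# Replace them with readings from a wattmeter on the actual hardware.

BASE_SYSTEM_W = 150      # assumed: NAS + hypervisor under typical load
GPU_W = 300              # assumed: one power-hungry GPU under LLM load
NUM_GPUS = 2
POWER_FACTOR = 0.9       # typical for active-PFC PSUs

def ups_va(load_w: float, pf: float = POWER_FACTOR, headroom: float = 1.25) -> float:
    """UPS VA rating needed for a given load, with ~25% headroom."""
    return load_w / pf * headroom

print(f"Base system only:  {ups_va(BASE_SYSTEM_W):.0f} VA")
print(f"With GPUs on UPS:  {ups_va(BASE_SYSTEM_W + NUM_GPUS * GPU_W):.0f} VA")
```

With these assumed numbers, keeping the GPUs on the UPS roughly quintuples the required VA rating, which is why I want them off it.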

17 points

The number of absolutely wrong answers in here is astounding.

NO. PCIE is not plug and play. Moreover, a dead PCIE device that was previously accepting data and then suddenly stops is almost guaranteed to cause a kernel panic on any OS, because of an overflowing bus full of data that can’t just sit there waiting. It’s a house of cards at that point. It’s also possibly going to harm the physical machine when the power comes back on, due to a sudden influx of power from an outside PSU powering up a device not meant for such things.

Why wouldn’t you instead think of maybe NOT running an insane workload on such a machine with insanely power-hungry GPUs, and maybe go for an AMD APU instead? Then you’ll get all the things you want.

16 points

PCIe is absolutely plug and play. Cards have been PnP since the ISA era. You probably meant hot-plug, but it’s hot-pluggable too: https://lwn.net/Articles/767885/

Any buffered data will sit in the buffer, and eventually be dropped. Any data sent to the buffer while the buffer is full will be dropped. I’m not intimately familiar with communicating with GPUs, but I imagine the only buffers are in the GPU driver (which would either handle the removal or crash) or in the application (which would probably not handle the removal and just crash). Buffering is not really where I would expect to see a problem.

That said, a GPU disappearing unexpectedly will probably crash your program, if not your whole OS. Physical damage is unlikely, though I definitely wouldn’t recommend connecting two PSUs to one system due to the potential for unexpected… well, potential. Inrush current wouldn’t really be my concern, since it would be pulling from the external PSU which should have plenty of capacity (and over-current protection too, I would hope). And it’s mostly a concern for AC systems, rarely for DC.
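
If you want to test an orderly removal (which is a much gentler case than the card abruptly losing its external power), Linux exposes PCIe remove/rescan through sysfs. A minimal sketch, assuming a Linux host, root privileges, and a hypothetical device address:

```python
# Minimal sketch of orderly PCIe device removal and rescan on Linux via sysfs.
# Requires root; the device address below is a placeholder -- find yours
# with `lspci`. This is an *orderly* removal, which is very different from
# the card suddenly losing its external power connectors.

from pathlib import Path
import time

DEVICE = "0000:01:00.0"  # hypothetical address -- substitute your GPU's

def remove_device(addr: str) -> None:
    """Ask the kernel to detach the driver and remove the device."""
    Path(f"/sys/bus/pci/devices/{addr}/remove").write_text("1")

def rescan_bus() -> None:
    """Ask the kernel to re-enumerate the PCI bus and rediscover devices."""
    Path("/sys/bus/pci/rescan").write_text("1")

if __name__ == "__main__":
    remove_device(DEVICE)
    time.sleep(2)
    rescan_bus()
```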

1 point

What’s wrong with 2 PSUs if both of them are connected to the same ground? I thought multiple PSUs were common in the server space too.

3 points

Server PSUs are designed to be identical and work in parallel (though depending on the platform, they can be configured as primary/hot spare, too). I’d be concerned about potential difference in power, especially with two non-matching PSUs. It would probably be fine, but not probably enough for me to trust my stuff to it. They’re just not designed or tested to operate like that, so they may behave unexpectedly.

-3 points

You are confusing “plug and play” with “hot swap/plug CAPABLE”. The spec allows specifically designed hardware to come and go, like ExpressCard, Thunderbolt, or USB4 lane-assigned devices, for example. That’s a feature built for a specific type of hardware to tolerate things like accepting current, or at least having a carrier chip communicating with the PCIE bridge that designates its current status. Almost all of these types of devices are not only designed for this, they are powered by the hardware they are plugged into, allowing that power to be negotiated and controlled by the bridge.

NOT like a giant GPU that requires its own power supply current and ground.

But hey, you read it on the Internet and seem to think it’s possible. Go ahead and try it out with your hardware and see what happens.

4 points

Dude… you’re the one that said PCIe isn’t plug and play, which is incorrect. Plug and play simply means not having to manually assign IRQ/DMA/etc. before using the peripheral, with that instead being handled automatically by the system/OS, as well as having peripherals identify themselves so the OS can automatically assign drivers (see the sketch below). PCIe is fully plug-and-play compatible via ACPI, and hot swapping is supported by the protocol, if the peripheral also supports it.
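
That self-identification is easy to see from userspace on Linux: every PCIe device advertises vendor/device IDs that the kernel matches against drivers. A minimal sketch, assuming only a Linux host with standard sysfs:

```python
# Minimal sketch of the OS-visible side of PCI plug and play.
# Each device advertises vendor/device IDs in config space; the kernel
# matches those IDs to a driver. Linux exposes all of it under sysfs.

from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()
    device = (dev / "device").read_text().strip()
    # "driver" is a symlink to the bound driver; absent if none is bound
    driver_link = dev / "driver"
    driver = driver_link.resolve().name if driver_link.exists() else "(none)"
    print(f"{dev.name}  {vendor}:{device}  driver={driver}")
```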

1 point

Right, it requires device support. And most GPUs won’t support it. But it’s by no means impossible.

I’ve got some junk hardware at work, I’ll try next time I’m in and let you know.

-3 points

You have multiple accounts, and are sadly so consumed with Internet points that you used both of them to downvote when you’re wrong. You’re pathetic. Get a hobby. Maybe learning about hardware!

5 points

I do something similar to OP. However, running LLMs is what finally convinced me to switch over to Kubernetes, for these exact reasons: I needed the ability to have GPUs running on separate nodes that I could then toggle on or off (see the sketch after this comment). Power concerns here are real; the only real solution is to separate your storage and your compute nodes.

What OP is suggesting is not only not going to work and will probably damage the motherboard and GPUs, but I would assume it’s also a pretty large fire hazard. One GPU takes in an insane amount of power, and two GPUs are not something to sneeze at. It’s worth the investment of getting a very good power supply and not cheaping out on any components.
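
For the curious, toggling a GPU node off is just cordon/drain before cutting power, and uncordon after. A minimal sketch, assuming `kubectl` is installed and configured for your cluster; the node name is a placeholder:

```python
# Minimal sketch of toggling a dedicated GPU node in a Kubernetes cluster.
# Assumes `kubectl` is configured; the node name is a placeholder.
# Cordon+drain moves work off the node so it can be powered down without
# taking storage or other services with it.

import subprocess

GPU_NODE = "gpu-node-01"  # hypothetical node name

def power_down_prep(node: str) -> None:
    # Mark the node unschedulable, then evict pods (ignoring DaemonSets)
    subprocess.run(["kubectl", "cordon", node], check=True)
    subprocess.run(
        ["kubectl", "drain", node, "--ignore-daemonsets", "--delete-emptydir-data"],
        check=True,
    )

def power_up_ready(node: str) -> None:
    # Allow the scheduler to place pods on the node again
    subprocess.run(["kubectl", "uncordon", node], check=True)

if __name__ == "__main__":
    power_down_prep(GPU_NODE)  # then power the node off safely
```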

1 point

You’re forgetting that the card would still be receiving its 75 W of power from the PCIe slot. This is what powers cards that don’t have extra power connectors.
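
For scale, the arithmetic looks like this. The slot limit is per the PCIe spec; the card’s board power and connector ratings are illustrative assumptions:

```python
# Rough arithmetic on what losing the external connectors means.
# The 75 W slot limit is from the PCIe spec; the GPU board power and
# connector ratings below are illustrative assumptions.

SLOT_W = 75              # max power a PCIe x16 slot may supply per spec
EIGHT_PIN_W = 150        # rating of one 8-pin PCIe power connector
GPU_BOARD_POWER_W = 300  # assumed power-hungry GPU, fed by two 8-pin plugs

available = SLOT_W
deficit = GPU_BOARD_POWER_W - available
print(f"Slot can cover {available} W of a {GPU_BOARD_POWER_W} W card")
print(f"Losing 2x 8-pin ({2 * EIGHT_PIN_W} W) leaves a {deficit} W shortfall")
```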

