49 points

Not really. It’s just a normal Zen 4 CPU with some server features like ECC memory support.

The biggest downfall of these chips is that they have the same 28 PCIe lanes as any consumer-grade Zen 4 CPU. Quite a difference between that and the cheapest EPYC CPUs outside the 4000 series.

You’re going to run into some serious I/O shortages if you try to fit a 10GbE card, an HBA for storage, a graphics card or two, and some NVMe drives.
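To put rough numbers on that, here’s a lane-budget sketch for the build described above. The per-device lane counts are typical values I’m assuming, not the specs of any particular card:

```python
# Rough PCIe lane budget for the hypothetical build above.
# Per-device lane counts are common/typical values, not product specs.
devices = {
    "10GbE NIC": 4,
    "HBA": 8,
    "GPU": 16,
    "2x NVMe drives": 8,  # two x4 drives
}
total_needed = sum(devices.values())  # 36 lanes
cpu_lanes_usable = 24                 # 28 total, minus 4 reserved for the chipset link
shortfall = total_needed - cpu_lanes_usable
print(f"need {total_needed} lanes, have {cpu_lanes_usable}, short by {shortfall}")
```

So even before a second GPU, you’re a dozen lanes short and relying on the chipset (or bifurcation) to make up the difference.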

13 points

> Not really. It’s just a normal Zen 4 CPU with some server features like ECC memory support.

I’m pretty sure all Zen CPUs have supported ECC memory, ever since the first generation.

6 points

A lot of the Zen-based APUs don’t support ECC. The next question is whether a chip takes registered or unregistered modules: everything up to Threadripper is unregistered (though I think some of the Pro parts take registered), while EPYCs are registered.

That makes a huge difference in how much RAM you can add, and how much you pay for it.

1 point

Consumer CPUs lacked ECC error reporting, so you never really knew whether ECC was actually correcting errors or not.

2 points

No, even the earliest Ryzens support ECC reporting just fine, provided the motherboard supports it, which many boards do. Only the non-Pro APUs lack ECC support.
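For what it’s worth, on Linux you can check whether ECC reporting is actually live through the kernel’s EDAC sysfs interface. A minimal sketch, assuming the appropriate edac driver (e.g. amd64_edac on Ryzen) is loaded:

```python
# Minimal sketch: read the Linux EDAC sysfs counters to see whether ECC
# reporting is active and whether any errors have been corrected.
# Requires a kernel with the relevant edac driver loaded; paths follow
# the kernel's EDAC sysfs layout (mc*/ce_count, mc*/ue_count).
from pathlib import Path

def ecc_counts(edac_root="/sys/devices/system/edac/mc"):
    """Return {controller: (corrected, uncorrected)}; empty if no EDAC controllers."""
    counts = {}
    for mc in sorted(Path(edac_root).glob("mc*")):
        ce = int((mc / "ce_count").read_text())
        ue = int((mc / "ue_count").read_text())
        counts[mc.name] = (ce, ue)
    return counts

if __name__ == "__main__":
    counts = ecc_counts()
    if not counts:
        print("No EDAC memory controllers found - ECC reporting is not active.")
    for name, (ce, ue) in counts.items():
        print(f"{name}: {ce} corrected, {ue} uncorrected")
```

If the `mc*` directories are missing entirely, either the board isn’t running in ECC mode or the driver isn’t loaded.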

1 point

Not officially. Only Ryzen Pro has official (unregistered) ECC support, and not many motherboards support it either. AFAIK Threadripper doesn’t officially support it either, but I could be wrong.

2 points

Many boards support ECC even when it isn’t mentioned; most ASUS and ASRock boards do, for example.

1 point

The newest Threadripper 7000 series not only supports ECC but requires it: it only accepts registered DDR5 ECC RAM.

3 points

Probably best to look at it as a competitor to a Xeon D system, rather than any full-size server.

We use a few of the Dell XR4000 at work (https://www.dell.com/en-us/shop/ipovw/poweredge-xr4510c), as they’re small, low power, and able to be mounted in a 2-post comms rack.

Our CPU of choice there is the Xeon D-2776NT (https://www.intel.com/content/www/us/en/products/sku/226239/intel-xeon-d2776nt-processor-25m-cache-up-to-3-20-ghz/specifications.html), which features 16 cores @ 2.1GHz, 32 PCIe 4.0 lanes, and is rated 117W.

The 4584PX, ostensibly the top of this range, also with 16 cores but at double the clock speed, with 28 PCIe 5.0 lanes and a 120W rating, seems like it would be a perfectly fine drop-in replacement for that.

(I will note one significant difference: the Xeon comes with a built-in NIC, in this case the 4-port 25Gb “E823-C”, saving you space and PCIe lanes in your system.)

As more PCIe 5.0 expansion options land, I’d expect the need for large quantities of PCIe to diminish somewhat. A 100Gb NIC would only require a x4 port, and even a x8 HBA could push more than 15GB/s. Indeed, if you compare the total possible PCIe throughput of those CPUs, 32x 4.0 is ~63GB/s, while 28x 5.0 gets you ~110GB/s.
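A quick back-of-envelope check of those throughput figures. Per-lane rates are derived from the raw signalling rate with 128b/130b encoding; real-world throughput is lower still once protocol overhead is accounted for:

```python
# Back-of-envelope PCIe aggregate throughput comparison.
# PCIe 4.0 runs at 16 GT/s and PCIe 5.0 at 32 GT/s, both with
# 128b/130b encoding; protocol overhead is ignored here.
def lane_gbps(gt_per_s):
    """Approximate usable GB/s per lane: GT/s * (128/130) / 8 bits-per-byte."""
    return gt_per_s * (128 / 130) / 8

xeon_d   = 32 * lane_gbps(16)  # 32 lanes of PCIe 4.0
epyc_4k  = 28 * lane_gbps(32)  # 28 lanes of PCIe 5.0
print(f"32x 4.0 ~= {xeon_d:.0f} GB/s, 28x 5.0 ~= {epyc_4k:.0f} GB/s")
```

Which comes out to roughly 63 GB/s versus 110 GB/s, matching the figures above.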

Unfortunately, we’re now at the mercy of what server designs these wind up in. I have to say though, I fully expect it is going to be smaller designs marketed as “edge” compute, like that Dell system.

2 points

We’ll see if they even make them. I can’t imagine there’s a huge customer base that really needs to cram all that I/O through only two or four lanes. Why make these ubiquitous cards more expensive if most of the customers buying them aren’t short on PCIe lanes? So far, most devices making use of 5.0 are graphics cards and storage. I’ve not seen any hint of someone making a SAS or 10GbE card that uses 5.0 with fewer lanes. Most cards for sale today still use 3.0, let alone 4.0.

I might as well just drop the cash on a real EPYC CPU with 128 lanes if I’m only going to be able to buy cutting edge expansion cards that companies may or may not be motivated to make.

1 point

Agreed, the PCIe layout is bad. My problem is the x16 slot.

I would prefer 8 slots/onboard devices at PCIe 5.0 x2 from the CPU, plus 2 slots of PCIe 4.0 x2 from the chipset. That would probably be adequate I/O, aiming for 2x25Gbit performance.
