I can’t say for sure, but there is a good chance I might have a problem.

The main picture attached to this post is a pair of dual bifurcation cards, each with a pair of Samsung PM963 1TB enterprise NVMes.

They are going into my r730XD, which… is getting pretty full. This will fill up the last empty PCIe slots.

But, knock on wood, my r730XD supports bifurcation! LOTS of bifurcation.

As a result, it now has more HDDs and NVMes than I can count.

What’s the problem, you ask? Well, that is just one of the many servers I have lying around here, all completely filled with NVMe and SATA SSDs…

Figured I would share. Seeing a bunch of SSDs is always a pretty sight.

And, as of two hours ago, my particular Lemmy instance was migrated to these new NVMes, completely transparently too.
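
If you want to sanity-check that the bifurcated slots actually expose every drive, here is a minimal sketch, assuming a Linux host (the sysfs paths are standard; the output format and variable names are my own):

```python
# Minimal sketch: list each NVMe controller the kernel enumerated, with its
# model string and the PCIe address it attached at. With x4/x4 bifurcation
# enabled on the slot, both drives on each card should appear as separate
# controllers here.
import os

SYS_NVME = "/sys/class/nvme"  # standard Linux sysfs class directory

for ctrl in sorted(os.listdir(SYS_NVME)):
    pci_addr = os.path.basename(
        os.path.realpath(os.path.join(SYS_NVME, ctrl, "device"))
    )
    with open(os.path.join(SYS_NVME, ctrl, "model")) as f:
        model = f.read().strip()
    print(f"{ctrl}: {model} @ {pci_addr}")
```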

92 points

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger that I’ve seen in this thread:

NVMe: Non-Volatile Memory Express (interface for mass storage)
PCIe: Peripheral Component Interconnect Express
SATA: Serial AT Attachment (interface for mass storage)
SSD: Solid State Drive (mass storage)

4 acronyms in this thread; the most compressed thread commented on today has 3 acronyms.

[Thread #13 for this sub, first seen 8th Aug 2023, 21:55] [FAQ] [Full list] [Contact] [Source code]

17 points

Good bot

10 points

Fantastic bot, honestly.

5 points

Good bot

23 points

I don’t see any issues!

/me hides his 16x 4TB 12G SAS drives…

7 points

I think I’m at 7x 18TB drives. I’m slowly replacing all the smaller 8TB disks in my server. Only 5 more to go. After that it’s a new server with more bays and/or a JBOD shelf.

1 point

That’s my next step. I have 8x 8TB drives I need to start swapping, 2x 512GB NVMes for system/app cache, and 1x 2TB NVMe for media cache.

1 point

The SAS drives are all SSDs. I also have 8x 12TB in spinning rust, and an LTO robot, though it’s not currently in service.

10 points

If that’s a problem, then I don’t want to be solved.

2 points

It’s only a problem when you get the electric bill! (Or when the wife finds your eBay receipts.)

4 points

I doubt these use much power compared to their spinning rust antecedents.

4 points

I meant my general electric bill. My server room averages 500-700 watts.
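
For a rough sense of scale, here is a back-of-envelope conversion of that continuous draw into a monthly figure; the $0.15/kWh rate is purely an assumption, so substitute your own:

```python
# Back-of-envelope: continuous wattage -> kWh per month -> rough cost.
# The electricity rate is an assumed placeholder, not a quoted figure.
RATE_PER_KWH = 0.15  # USD, assumed

for watts in (500, 700):
    kwh_per_month = watts / 1000 * 24 * 30
    cost = kwh_per_month * RATE_PER_KWH
    print(f"{watts} W continuous ~ {kwh_per_month:.0f} kWh/month ~ ${cost:.0f}/month")
```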

2 points

Was curious how many watts this machine pulls. Also curious, if you had ever filled it with spinning disks, would flash be less power hungry?

2 points

This one averages around 220-250 watts.

It’s completely full of spinning disks. Flash would use less power, but would end up being drastically more expensive.

10 points

I dream of this kind of storage. I just added a second M.2 with a couple of TB on it, and the space is lovely, but I can already see I’ll fill it sooner than I’d like.

6 points

I will say, it’s nice not having to nickel and dime my storage.

But, the way I have things configured, redundancy takes up a huge chunk of the overall storage.

I have around 10x 1TB NVMe and SATA SSDs in a Ceph cluster. 60% storage overhead there.

Four of those 8TB disks are in a ZFS striped mirror / RAID 10. 50% storage overhead.

The 4x 970 Evo / Evo Plus drives are also in a striped-mirror ZFS pool. 50% overhead.

But still PLENTY of usable storage, and highly available at that!
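
As a rough illustration of what those overhead figures work out to in usable space, here is a back-of-envelope sketch; the drive counts and overhead percentages are the ones quoted above, while the per-drive raw sizes are my assumptions:

```python
# Usable-capacity estimate for the pools described above. Overhead figures
# are as quoted; per-drive raw sizes (1 TB / 8 TB class) are assumptions.
pools = [
    {"name": "Ceph (NVMe + SATA SSD)",       "drives": 10, "size_tb": 1.0, "overhead": 0.60},
    {"name": "ZFS striped mirror (8TB HDD)", "drives": 4,  "size_tb": 8.0, "overhead": 0.50},
    {"name": "ZFS striped mirror (970 Evo)", "drives": 4,  "size_tb": 1.0, "overhead": 0.50},
]

for p in pools:
    raw = p["drives"] * p["size_tb"]
    usable = raw * (1 - p["overhead"])
    print(f"{p['name']}: {raw:.0f} TB raw -> ~{usable:.0f} TB usable")
```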

1 point

Any reason you went with a striped mirror instead of raidz1/raidz2?

3 points

The two ZFS pools are only 4 devices each. One pool is spinning rust; the other is all NVMe.

I don’t use RAID 5 for large disks, and instead go for RAID 6 / raidz2. Given that raidz2 and striped mirrors both have 50% overhead with only 4 disks, striped mirrors have the advantage of being much faster: double the IOPS and faster rebuilds. For these particular pools, performance was more important than overall disk space.

However, before all of these disks were moved from TrueNAS to Unraid, there was an 8x 8TB raidz2 pool, which worked exceptionally well.
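
To make that tradeoff concrete for a 4-disk pool, here is a small sketch of the capacity math plus a very rough IOPS proxy (treating random IOPS as scaling with vdev count is a simplification, not a benchmark):

```python
# Compare a striped mirror (2x 2-way mirror vdevs) against a single raidz2
# vdev for 4 equal disks: same 50% capacity overhead, but the mirror pool
# has twice the vdevs, and ZFS random IOPS scale roughly with vdev count.
def pool_summary(disks: int, disk_tb: float, layout: str) -> dict:
    if layout == "striped-mirror":
        vdevs = disks // 2                # 2-way mirrors
        usable = vdevs * disk_tb
    elif layout == "raidz2":
        vdevs = 1
        usable = (disks - 2) * disk_tb    # two parity disks per raidz2 vdev
    else:
        raise ValueError(layout)
    raw = disks * disk_tb
    return {
        "layout": layout,
        "usable_tb": usable,
        "overhead_pct": round(100 * (1 - usable / raw)),
        "relative_random_iops": vdevs,    # rough proxy only
    }

for layout in ("striped-mirror", "raidz2"):
    print(pool_summary(4, 8.0, layout))
```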


Cripes, I was stoked that I managed to upgrade from 4x 2TB to 4x 4TB recently.

6 points

Is your problem that you are bragging about your drives?


I’m out of room to add more drives!

Every one of my servers is basically completely full of disks. I need more servers.

2 points

I need some drives

