EtherMan@alien.top

Number 3 there is just… wrong. It shows you selecting striped, but striped is RAID 0, not RAID 1, so if you selected that, that's why. If you selected mirrored, though, you should indeed only see 20 TB if each drive is 20 TB. And no, there should be no issue with having had Storage Spaces previously.

Ok so, Storage Spaces isn't the same as RAID. A mirrored Storage Spaces pool is not RAID 1. It's very similar in that it's a mirrored set, but it's not the same thing. In Storage Spaces you can have a mirrored set with 3 drives, and you'll actually be able to store about one and a half times one drive's worth of data in that pool. This is because in Storage Spaces it's the DATA that is being duplicated, not the drives. So don't confuse the concepts.
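To put a number on that, here's a back-of-envelope sketch. It assumes a two-way mirror, evenly sized drives, and ignores metadata overhead; the function name is made up for illustration:

```python
def mirror_pool_capacity_tb(drive_sizes_tb, copies=2):
    """Rough usable capacity of a mirrored Storage Spaces pool.

    Because Storage Spaces mirrors DATA (each slab is stored `copies`
    times, placed on any two drives), usable space is roughly total
    raw capacity divided by the number of copies, not limited to
    drive pairs like classic RAID 1.
    """
    return sum(drive_sizes_tb) / copies

# Three 20 TB drives in a two-way mirror:
print(mirror_pool_capacity_tb([20, 20, 20]))  # 30.0, i.e. ~1.5x one drive
```

With very uneven drive sizes the real number can be lower, since each copy of a slab has to land on a different drive.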

Now, as for why it shows that size: because you configured it to. Storage Spaces completely decouples the pool size from the actual currently usable space. You can create a pool from a single 8 GB drive, declare it a 1 PB pool, and it'll happily do that for you. You'll still only be able to actually store 8 GB ofc, but the pool will report the 1 PB of maximum space.
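A toy model of that decoupling; this is not Storage Spaces' actual implementation, just an illustration of thin provisioning, and all the names are made up:

```python
class ThinPool:
    """Toy thin-provisioned pool: the advertised size is independent
    of physical capacity; writes only fail when the physical backing
    actually runs out."""

    def __init__(self, advertised_gb, physical_gb):
        self.advertised_gb = advertised_gb  # what the pool reports
        self.physical_gb = physical_gb      # what really exists
        self.used_gb = 0

    def write(self, gb):
        if self.used_gb + gb > self.physical_gb:
            raise IOError("pool out of physical space")
        self.used_gb += gb

# "1 PB" pool backed by a single 8 GB drive:
pool = ThinPool(advertised_gb=1_000_000, physical_gb=8)
print(pool.advertised_gb)  # 1000000 -- reports the full 1 PB
pool.write(8)              # fine
# pool.write(1)            # would raise: only 8 GB really exists
```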

Generally the enclosures are just that: enclosures that offer the connection. There are exceptions, though, where the enclosure does something more. Some enclosures do encryption, and some use the same controller for their single-drive and multi-drive models, so your one drive is actually set up as a single-drive RAID array. In that case the data may be slightly shifted to accommodate the headers for that array. You can still recover everything, but it's a pain.

But as I said, generally they’re just providing the drive as is in which case there won’t be any issue.

It really depends. For, say, a desktop, I'd avoid it unless it was really cheap, since it basically nullifies the value of all the non-standard parts, and I'd include things like the CPU in that if the motherboard is non-standard. So the value basically comes down to just the drives and such.

For a server though, non-standard is the norm, and here vendors even do things like vendor locking instead, which IMO is a way bigger issue, especially since whether a given unit does it or not isn't something anyone actually tells you before you test it.

You're using experimental drivers and force unmounting… and you actually have the gall to pin the blame for the resulting errors on NTFS? Just no.

NTFS does have many issues, which is why MS is developing ReFS to replace it. But stability or corruption isn't one of them. NTFS is extremely solid in that regard thanks to its journaling.

The NTFS drivers in Linux, however, are very buggy and generally considered experimental, and you should not write to NTFS drives from Linux if there's any data there you care about, as it could easily destroy all of it.

If you need a common writable data area then use exfat, not ntfs.

I have 7 dual-CPU servers, so I might be a bit biased in this regard. But "worthwhile" is entirely subjective. "Robust" is also a weird word choice, since there are multiple conflicting interpretations of it.

For worthwhile… Well, as I said, it's subjective, but cost efficiency is very rarely the driving factor for homelabs.

For robust: do you mean robust in the sense of more powerful? Then ofc a dual-socket server will be more robust, but then you're back to worthwhile. If you mean robust in terms of stability, then absolutely not. Multi-socket servers are much less stable than single-socket ones. Not unstable by any stretch, but not AS stable. Every additional component you add always adds complexity and, most importantly, additional points of possible failure, while at the same time the system can't survive if one CPU dies. So the stability of the system goes down the more CPU sockets you have. That's why dual and quad socket are so popular even though 8-socket and larger systems actually exist and are denser, which matters in datacenters. Beyond quad socket you start getting real enough system stability issues that it's usually better to sacrifice some density and go for more servers instead, and blade centers are usually not THAT much lower density anyway.
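That reasoning can be sketched as a quick calculation, assuming each CPU fails independently with some fixed probability and the system goes down as soon as any one CPU dies (the 2% rate below is a made-up illustrative number, not real failure data):

```python
def system_survival(p_fail_per_cpu, sockets):
    """Probability the whole system stays up over some period when it
    dies as soon as ANY one of its CPUs dies: (1 - p) ** sockets.
    Every extra socket strictly lowers availability."""
    return (1 - p_fail_per_cpu) ** sockets

# Illustrative 2% per-CPU failure rate over some period:
for n in (1, 2, 4, 8):
    print(n, "sockets:", round(system_survival(0.02, n), 4))
```

The same shape holds for any per-component failure rate: the survival probability drops geometrically with socket count, which is the "more sockets, less stable" point above.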

You could have tried it on an older Android device first. But for 8 gigs, I wouldn't even have bothered; I'd just throw them in a bucket and be done with it.

Absolutely nothing has been as helpful for understanding how the internet works as setting up and actually using BGP. An ASN and a /40 of IPv6 space can be had for almost nothing as a one-time fee if you go through a LIR. IPv4 is very expensive to buy, but renting a /24 can be had for around $100 a month. Then you're ready to start peering over tunnels, or you can get VPSes that support it, or ask your ISP (usually only on higher-end business connections).

If stability is what you're after (both in terms of versioning and in the sense of as few unscheduled reboots as possible), then neither is a good option. Both update quite often, follow an "introduce the feature now, worry about stability later" approach, and end up having to constantly patch a bunch of stuff.

If you're comfortable with a CLI, then I'd recommend VyOS on the stable branch. It's had 3 service patches since 1.3.0 released in 2021: the last was in June, and before that you have to go back to September last year. Ofc, the downside is that you'll miss out on a lot of features. I don't think stable has WireGuard support yet, for instance, and I'm not certain it will be ready when 1.4 goes stable either (it's currently in 1.4 rolling). You could implement some of it yourself because it's built on Debian, but anything you do like that is tied to your current image, so if you upgrade you have to do it all again, which is why I don't recommend it.

Point is, if you need features, don't; but if it's maximum stability you're after, I can highly recommend at least having a look. Though I always recommend getting a proper router over any router OS on amd64. You'll get more out of it, cheaper, with less power consumption and lower latency.

As in average? 1491 W as a 30-day average according to the power meter. Fully loading everything is around 5 kW IIRC, though that doesn't really happen. The highest in the last 30 days is a 3774 W peak, and I think that was when I accidentally shut down the UPS, so everything was booting at the same time afterwards. I don't think I ever go over 3 kW in normal circumstances.

I'm using 5 storage servers, 2 of which are Storinators and 3 Supermicros. Then two compute nodes, which are ProLiant DL380s: a Gen10, plus a Gen11 that I just bought last week. And ofc some network gear, which isn't really anything too fancy; it's just two routers, and while they do do PoE, I don't use it, so they're not really high power or anything.
