I just bought a Micro Center brand 1 TB SSD for less than $50. Can an HDD compete with that on price and read/write speed?
Also recently bought a gaming PC that does not have an HDD, only a 1 TB SSD.
I think the HDD’s day as a boot drive is over, unless they get a lot faster, which I think is unlikely.
HDDs are certainly still useful for larger amounts of storage, though. Self-hosting, data centers, etc.
ETA: I don’t think any of the responses read my entire comment. See the LAST SENTENCE in particular, friends.
The last set of NAS drives I bought for my home server were ~$120 for 8TB, and while random access may not quite measure up, I’d put them up against your $50 Inland white-label drive for sustained R/W any day of the week, especially once the SSD’s write cache is saturated. That’s not even comparing like-for-like: consumer hard drives using SMR are quite a bit cheaper than the NAS drives I bought, and enterprise-grade flash storage costs 2-4 times as much as low-end consumer flash.
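To put hypothetical numbers on that cache-saturation point (every figure below is a ballpark assumption I’m making, not a benchmark):

```bash
# Back-of-envelope only; all figures are assumptions, not measurements.
slc_cache_gb=100        # plausible dynamic SLC cache on a budget 1 TB drive
burst_mb_s=2000         # SSD write speed while the cache has room
post_cache_mb_s=140     # direct-to-QLC write speed once the cache is full
hdd_sustained_mb_s=250  # decent 8 TB NAS drive on outer tracks

echo "cache fills after ~$(( slc_cache_gb * 1000 / burst_mb_s )) s of sustained writes"
echo "after that: SSD ${post_cache_mb_s} MB/s vs HDD ${hdd_sustained_mb_s} MB/s"
```

Past that first minute or so of heavy writing, the cheap drive can genuinely fall behind the spinning disk.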
There’s absolutely still a case to be made for mechanical drives in near-line storage, and that’s not likely to change for quite a few years yet.
My NAS device has 80TB of usable space (6x16TB, RAID5). The equivalent in SSDs would’ve cost tens of thousands of dollars in drives alone.
Once 16TB SSDs are even available I will probably start migrating them in, but for now mechanical drives it is.
A 4TB SATA SSD is 200 EUR. For 96 TB raw you would need 24 of them (fewer, around 21, for 80TB usable in RAID5). It would cost between 4k and 4.5k EUR. Prices are going down fast.
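Spelling out the arithmetic (the price per drive is the only assumption):

```bash
# 200 EUR per 4 TB SATA SSD assumed
echo $(( 24 * 200 ))   # 4800 EUR for 96 TB raw (24 drives)
echo $(( 21 * 200 ))   # 4200 EUR for ~80 TB usable in RAID5 (20 data + 1 parity)
```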
Plus you’d need a way to have that many drives connected at once, which means more cost.
If you’re able to get enterprise SSDs, you could get 16TB ones… but no clue what minimum order sizes are like for that kind of thing. But if you wanted to use 16TB SSDs instead of making a 100% down payment on a house, that’s probably an option.
Person with vested interest in X says X will continue to proliferate. More at 11
Also, for some extra context:
Besides speed, the main problem with spinning-rust hard drives ultimately comes down to reliability: you have to baby them. One bad shock and the read/write head scratches the platter, and all your data is gone without any way to recover it.
Datacenters usually have redundancy just in case, but now that NAND flash is dirt cheap, the flaws of spinning rust are too great to overcome.
Considering that the head hovers mere nanometers over the platter, something as simple as loud noise can cause enough vibration to affect disk performance, so the force needed to permanently damage a disk is really, really small.
I always love seeing that video in the wild, but vibrations affecting performance and vibrations causing damage are two entirely different things, particularly because that performance drop might be the head parking itself to avoid actual damage.
As a personal anecdote, I once installed Windows on a laptop while sitting in the back seat of a car driving on not-so-great roads, and while I wouldn’t recommend it, the laptop was still good years later.
Speaking of, the entire concept of laptops wouldn’t have worked before SSDs became mainstream if HDDs were actually that fragile.
I mean, with stuff like ZFS it’s a little hard to justify the outlay for all-solid-state storage when I can build out a large storage array using HDDs and use one mid-size SSD for the ZIL and L2ARC to provide read/write speedups. Who actually cares what the underlying storage mechanism is, as long as the dataset is backed up and the performance is good?
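For anyone wondering what that actually looks like, here’s a minimal sketch (the RAIDZ2 layout and device names are placeholders I picked, not a recommendation):

```bash
# A minimal sketch, not a tested layout; all device names are placeholders.
# Bulk storage: six HDDs in RAIDZ2. One NVMe SSD split into two partitions:
# a small one as SLOG (the separate ZIL) and a bigger one as L2ARC read cache.
zpool create tank raidz2 sda sdb sdc sdd sde sdf \
  log   nvme0n1p1 \
  cache nvme0n1p2
```

Worth noting the SLOG only accelerates synchronous writes (NFS, databases and the like), so whether it earns its partition depends on the workload.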
As a newb I hope one day in my journey, I can look back at this and say “I finally understand this.” Til then thank you, magic man
The money you save by not buying expensive SSDs leaves you a lot of power to waste (20€ a year in electricity is not much). Where we use HDDs, we don’t care about noise. Durability? We use huge RAID systems with lots of redundancy.
I personally like to swap in new drives after 5 years to avoid failures. So when you find a 16 TB SSD for 350€, send me a message.
My 4-bay HDD NAS uses around 45W, 50W with some light load, and 70W when spinning up. That’s about 1 kWh per day, or 150 EUR per year.
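Checking that math (the electricity price is my assumption; ~0.40 EUR/kWh gets you close to those numbers):

```bash
# 45 W around the clock, at an assumed 0.40 EUR/kWh
echo "45 * 24 / 1000" | bc -l               # ~1.08 kWh per day
echo "45 * 24 / 1000 * 365 * 0.40" | bc -l  # ~158 EUR per year
```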
I use it in my room, so I very much care about noise.
More durability = less redundancy (less cost) + less frequent swaps (less cost). My anecdotal evidence: 1 failed SSD in 15 years (a 160GB Intel, basically first-gen). Every other SSD is still working. I have a drawer full of failed HDDs.
Plus more performance.
I admin a datacenter and hard drives are never going anywhere. Same with tapes.
Microsoft has already proven that underwater data centers are viable - they just need to scale up now
Project Natick Phase 2 - https://natick.research.microsoft.com/
I work tech support for a NAS company and the ratio of HDDs to SSDs is roughly 85-15. Sometimes people use SSDs for stuff that requires low latency, but most commonly they’re used as a cache for HDDs in my experience.
Not much point in using SSDs in a NAS if it’s there just for holding your files
If the NAS supports tiered storage, you benefit from high I/O performance for things like video editing.
My home storage is a NAS connected over 10GbE. I never bothered trying to play games off of it, but I’ll bet they’d run great. Reading and writing over the network at 10 gigabit to (separate) RAID arrays of SSDs and HDDs is faster than internal SATA3 connectivity, which is kind of bonkers for a home user. Plus the NAS side runs virtual machines and cloud backups.
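The line-rate math behind that (theoretical ceilings; real throughput lands a bit lower after protocol overhead):

```bash
echo "10 / 8" | bc -l       # 10 GbE: 1.25 GB/s
echo "6 * 0.8 / 8" | bc -l  # SATA3: 6 Gbit/s * 8b/10b encoding = 0.6 GB/s
```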
Lower power usage, smaller size, and maaaaaaaaybe better reliability. I’d probably do it if it were cost-competitive… but it’s not yet.
Work for one of the largest and we literally finished phasing out tape this year lol.