Hello Comrades,

Thanks for all your advice about setting up Linux. It was a success. The problem is that now I’m intrigued and I’d like to play around a bit more.

I’m thinking of building a cheap-ish computer but I have a few questions. I’ll split them into separate posts to make things easier. Note: I won’t be installing anything that I can’t get to work on Linux.

Question about storage and swap memory.

I plan to install an SSD of maybe 128–256GB for the system files and a larger HDD for storage. I would partition the SSD so that I could install a few different distros side by side without wiping any of them. This way I can commit to some longer experiments before deciding which distro to use.
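To make that concrete, I’m picturing a layout along these lines (the sizes and the shared EFI partition are just my guesses, not a worked-out plan):

```
sda1   512M   EFI system partition (shared by all distros)
sda2    40G   distro A  /
sda3    40G   distro B  /
sda4    16G   swap (shared? see below)
```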

The question is: should I have the swap partition on the SSD (with the OS partition) or (separately) on the HDD?

And if I install multiple distros, do I need a different swap partition for each one? For example, if I install 16GB of RAM, do I need a separate 16GB swap partition for each of, say, Mint, Debian, and Ubuntu? Or can I let them ‘share’ one swap partition?
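From what I’ve read, sharing might just be a matter of pointing each distro’s /etc/fstab at the same partition, something like this (the UUID is invented for the example):

```
# the same line in every distro's /etc/fstab
UUID=1234abcd-5678-90ef-1234-567890abcdef  none  swap  sw  0  0
```

Though I gather hibernation complicates sharing, since each distro would want to write its resume image to that same partition.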

Are there any additional security/privacy risks to installing more than one distro on the same SSD?


Ah, sorry for the slow replies, I didn’t notice the messages in my inbox.

Generally the second one, but it’s kinda both. Modern flash memory works by storing an electric charge in a memory cell, and the number of writes per cell is limited. To make memory denser (and thus cheaper), modern designs store multiple bits in a single cell (called MLC flash; currently it’s usually TLC, with three bits per cell). This means that changing a single bit usually leads to the whole cell being read and written again.

All of this is abstracted away by the memory controller on a modern SSD. In general the controller handles things like defragmentation and wear leveling on the fly, and none of it is visible to the user. So in theory even a file written only once could be moved to a different part of the memory chips when the controller sees fit. The controller tries to keep cell writes to a minimum and to spread them out evenly over the whole drive, and when cells start failing there are usually spares that are not directly visible to the user (over-provisioning).
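To make the wear-leveling idea concrete, here’s a toy sketch in Python (purely illustrative, a real controller is far more sophisticated): even if the OS hammers the same few logical blocks, the controller keeps remapping them so the physical erase counts stay even across the whole drive.

```python
import random

CELLS = 100          # pretend the drive has 100 physical erase blocks
wear = [0] * CELLS   # erase count per physical block
mapping = {}         # logical block -> physical block

def write(logical: int) -> None:
    # A real controller is much smarter, but the core trick is the same:
    # never rewrite in place, always pick a lightly-worn block.
    target = min(range(CELLS), key=lambda c: wear[c])
    wear[target] += 1
    mapping[logical] = target

# The OS rewrites the same 5 logical blocks 10,000 times...
for _ in range(10_000):
    write(random.randrange(5))

# ...but the wear ends up spread evenly over all 100 physical blocks.
print(f"max wear: {max(wear)}, min wear: {min(wear)}")
```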

The only thing you can really influence is the amount of “locked in” cells, aka how much of the SSD is filled up with data. As long as there is plenty of room to spread out writes, SSDs will last ages. But if you have a 120 GB drive filled with 119 GB of data and you write a couple of hundred MB a day… the controller will struggle to keep the overall drive healthy.
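As a very rough back-of-the-envelope (all numbers invented, and it ignores write amplification, which gets brutal on a nearly-full drive):

```python
PE_CYCLES = 1000        # ballpark endurance per TLC cell, in erase cycles
DAILY_WRITES_GB = 0.2   # "a couple of hundred MB a day"

def years_until_worn(free_gb: float) -> float:
    # Writes rotate through the free pool, so every GB of free space
    # can absorb roughly PE_CYCLES GB of writes before wearing out.
    return free_gb * PE_CYCLES / DAILY_WRITES_GB / 365

print(f"  1 GB free: ~{years_until_worn(1):.0f} years of headroom")
print(f" 50 GB free: ~{years_until_worn(50):.0f} years of headroom")
```

The absolute numbers are made up; the point is the 50x difference between a stuffed drive and one with room to breathe.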

So I’d say if it’s not some very heavy usage (like 24/7 writing server logs to disk on a production server), don’t worry about it. I’ve never had an SSD fail; I usually replace them with larger/newer ones before that point. I have a 128 GB SSD from 2011 that was used in my daily system for like 7 years in suboptimal conditions (the drive was pretty full most of the time) and it’s still perfectly healthy. Just buy a 1TB drive and forget about it.

On the topic of backups: it’s always good to keep backups, but SSDs are not the best medium for that. Traditional HDDs in some redundant/resilient configuration like a RAID array are way better for that.

