I want to create a copy of my NAS-hosted media folder, which is about 30TB. I have a bunch of 4-8TB local disks (mostly USB3, some SATA), and I would like to copy these files to the various destinations, maximizing space used and minimizing time required. Since I have a 10GbE network, I can read data far faster than I can write to any one destination, multiple simultaneous file copies are needed to make the most of it. Doing this manually is painful: trying to select the maximum set of files that fits on (but doesn't go over) each destination is a pain. Any thoughts on a script or an app I can use to assist here are appreciated. I want to leave the files in their native format, so I am looking for a file copy, not a block-based backup etc.
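Deciding which files go on which disk is a classic bin-packing problem, and a greedy first-fit-decreasing pass usually gets close enough for this use case. A minimal sketch (the file names, sizes, and disk capacities below are made-up placeholders, not anything from the actual setup):

```python
def pack_files(files, disks):
    """files: {path: size}; disks: {name: capacity} (same units).
    Returns ({disk: [paths]}, [paths that fit nowhere])."""
    free = dict(disks)
    plan = {name: [] for name in disks}
    leftover = []
    # Placing the largest files first tends to waste the least space.
    for path, size in sorted(files.items(), key=lambda kv: -kv[1]):
        target = next((d for d in free if free[d] >= size), None)
        if target is None:
            leftover.append(path)
            continue
        plan[target].append(path)
        free[target] -= size
    return plan, leftover

# Toy example: two 8TB disks, four files (sizes in TB).
plan, leftover = pack_files(
    {"a.mkv": 7, "b.mkv": 5, "c.mkv": 4, "d.mkv": 2},
    {"disk1": 8, "disk2": 8},
)
```

In practice you would feed it real sizes from `os.path.getsize` and leave a safety margin on each disk's capacity, since filesystem overhead eats a little space.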

1 point

Assuming Windows, Robocopy has a multithread option that might be able to do multiple copies simultaneously. Haven’t used it. Not sure. Free. Worth looking into.

1 point

If you're only looking to copy every file to a new destination, split the mass into local directories that you know will fit first, rather than copying one file and moving on to the next. Check each staging directory against its destination's capacity, and once you know exactly which set goes where, you can transfer them all at once. One of the easiest ways to do this is to work with whole directory trees. Stick to top-level directories, since they're the simplest to track. Directories that take up too much space or contain lots of subfolders are best kept together in one place. If you do it correctly, once you've finished you can just merge all the folders back into place.
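The size check the approach above depends on is simple to script. A hypothetical helper that totals a candidate directory tree before committing it to a destination disk:

```python
import os

def tree_size(root):
    """Bytes used by all regular files under root (symlinks not followed)."""
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path):
                total += os.path.getsize(path)
    return total
```

Compare the result against `shutil.disk_usage(dest).free` before starting the transfer.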

1 point

trapexit/mergerfs: a featureful union filesystem

mergerfs is a union filesystem geared towards simplifying storage and management of files across numerous commodity storage devices. It is similar to mhddfs, unionfs, and aufs.

FEATURES

  • Configurable behaviors / file placement
  • Ability to add or remove filesystems at will
  • Resistance to individual filesystem failure
  • Support for extended attributes (xattrs)
  • Support for file attributes (chattr)
  • Runtime configurable (via xattrs)
  • Works with heterogeneous filesystem types
  • Moving of file when filesystem runs out of space while writing
  • Ignore read-only filesystems when creating files
  • Turn read-only files into symlinks to underlying file
  • Hard link copy-on-write / CoW
  • Support for POSIX ACLs
  • Misc other things

HOW IT WORKS

mergerfs logically merges multiple paths together. Think a union of sets. The file/s or directory/s acted on or presented through mergerfs are based on the policy chosen for that particular action.

https://github.com/trapexit/mergerfs
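For this use case, pooling the empty destination disks would let a single copy fill them all. A sketch of what such a mount could look like (the branch paths here are assumptions; `category.create=mfs` places each new file on the branch with the most free space, and `moveonenospc` handles a disk filling mid-write, per the feature list above):

```shell
# Pool three destination disks under one mount point.
mergerfs -o category.create=mfs,moveonenospc=true \
    /mnt/usb1:/mnt/usb2:/mnt/sata1 /mnt/pool

# A single recursive copy into the pool then spreads files across the disks.
rsync -a --progress /mnt/nas/media/ /mnt/pool/media/
```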


1 point

This is an interesting option, but is it suitable for a one-time copy?


Data Hoarder

!datahoarder@selfhosted.forum
