I have a Proxmox+Debian+Docker server and I’m looking to set up my backups so that they get backed up (DUH) on my Linux PC whenever it comes online on the local network.
I’m not sure what’s best: should I back up locally and have something else handle the copying? How do I make those backups run if they haven’t run in a while, regardless of whether the PC is available? Should the PC run the logic, or should the server keep control of it?
Mostly I don’t want to waste space on my server because it’s limited…
I don’t know the what and I don’t know the how right now, so any input is appreciated.
I see syncthing being recommended, and like, it’s fine.
But keep in mind it’s NOT a backup tool; it’s a syncing tool.
If something happens to the data on your client, for example, it will happily sync and overwrite your Linux box’s copy with junk, or if you delete something, it’ll vanish in both places.
It’s not a replacement for recoverable-in-a-disaster backups, and you should make sure you’ve got a copy somewhere that isn’t subject to the client nuking it if something goes wrong.
Thanks for the heads up, yeah, I’m well aware of that; I use it to, well… sync my phone pictures with my PC.
I use Syncthing to copy important files between PC, phone and Proxmox server. Syncthing can be set up with file versioning so it keeps old versions of files.
Only the Proxmox server is properly backed up though, to a Proxmox Backup Server running in a VM on said Proxmox server. The encrypted backup files are copied to Backblaze using rclone.
Not sure if this is what you are looking for, but it works for me.
TL;DR: Syncthing for copies between local machines, and Proxmox Backup Server plus Backblaze for proper backups.
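In case it helps, here’s roughly what the rclone leg looks like — the remote name, bucket and datastore path are placeholders, not my actual config:

```bash
# One-time, interactive: define a Backblaze B2 remote (called "b2" here)
rclone config

# Copy the PBS datastore's (already encrypted) backup files to the bucket.
# Paths and bucket name are examples only.
rclone copy /mnt/datastore/pbs b2:my-backup-bucket/pbs \
    --transfers 4 --fast-list
```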
I’m not the best person to query about backups, but in your situation I would do the following, assuming both server and desktop run on BTRFS:
Have a script on the desktop that starts btrfs receive and then notifies the server that it should start btrfs send.
You can also do rsync if BTRFS is not a thing you use, but it would either be expensive storage-wise, or you would only ever have one backup: the latest.
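A rough sketch of what that desktop-side script could look like — host name, subvolume and snapshot paths are all invented, and the very first run has to be a full send (drop the -p):

```bash
#!/usr/bin/env bash
# Pull an incremental BTRFS snapshot from the server to the desktop.
# "server", /srv/data and the snapshot paths are placeholders.
set -euo pipefail

NEW="snap-$(date +%F-%H%M)"

# Ask the server to take a read-only snapshot and send only the delta
# relative to the previous one; receive it into the local backup pool.
ssh server "btrfs subvolume snapshot -r /srv/data /mnt/snapshots/$NEW \
  && btrfs send -p /mnt/snapshots/last /mnt/snapshots/$NEW" \
  | btrfs receive /mnt/backups

# Rotate the parent on the server so the next run stays incremental.
ssh server "btrfs subvolume delete /mnt/snapshots/last \
  && mv /mnt/snapshots/$NEW /mnt/snapshots/last"
```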
As to how, I’d probably use zfs send | receive, any built-in functionality of a CoW filesystem, rsnapshot, rclone or just Syncthing. As to when, I’d probably hack something together with systemd triggers (e.g. on network connection, send all remaining incremental snapshots). But this would only be needed in some cases (e.g. not using Syncthing ;p)
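A sketch of what that systemd hack could look like, as a timer-driven unit that simply skips itself when the PC isn’t reachable — unit names, the ping target and the script path are made up:

```ini
# /etc/systemd/system/push-backup.timer
[Unit]
Description=Periodically try to push snapshots to the desktop

[Timer]
OnBootSec=5min
OnUnitActiveSec=1h
# Catch up after downtime instead of silently skipping a window
Persistent=true

[Install]
WantedBy=timers.target
```

```ini
# /etc/systemd/system/push-backup.service
[Unit]
Description=Push remaining snapshots if the desktop is reachable
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
# ExecCondition skips the run (without marking it failed) when the ping fails
ExecCondition=/usr/bin/ping -c1 -W2 desktop.lan
ExecStart=/usr/local/bin/send-remaining-snapshots.sh
```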
Since space is a major concern, maybe have a look at borg, and possibly something like borgmatic on top for easier configuration. Borg does deduplicated backups, so you could take even hourly ones without much extra space, depending on how many you keep. You’d need borg running on whatever machine stores your backups, so it’s not a simple rsync-over-ssh situation, but that’s the price you pay for the extra niceties.
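For a feel of it, a minimal sketch — the repo URL, passphrase handling, source paths and retention numbers are all placeholders:

```bash
# Placeholders throughout: adjust host, repo path and source directories.
export BORG_REPO=ssh://user@desktop.lan/mnt/backups/server-repo
export BORG_PASSCOMMAND='cat /root/.borg-passphrase'

# One-time repository setup:
#   borg init --encryption=repokey "$BORG_REPO"

# Create a deduplicated, compressed archive named after host and time.
borg create --stats --compression zstd \
    '::{hostname}-{now:%Y-%m-%dT%H:%M}' \
    /srv/data /etc

# Keep the archive count bounded; dedup makes frequent archives cheap.
borg prune --keep-hourly 24 --keep-daily 7 --keep-weekly 4
```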