My favorite frontend for this is Pika Backup.
tl;dr Duplicity does full or incremental backups; BorgBackup always does full backups, but with deduplication.
After the first backup with Duplicity, you can choose to do an incremental backup, which only stores the data that has changed since the last backup. This saves time and disk space, but you still have to do slow full backups regularly, because a restore needs the last full backup plus every increment made since then. See question 3 of the Duplicity FAQ.
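Just to make the incremental idea concrete, here's a rough Python sketch of the concept, not Duplicity's actual code (`collect_changed` and `last_backup_time` are names I made up): it walks the source tree and picks only the files modified since the previous run.

```python
import os

# Toy illustration of an incremental backup pass (NOT how Duplicity is
# implemented): select only files modified after the previous backup ran.
def collect_changed(source_dir: str, last_backup_time: float) -> list[str]:
    changed = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) > last_backup_time:
                changed.append(path)
    return changed

# A full backup would copy everything; an incremental run only copies
# `changed`, so restoring later means replaying the full backup plus
# every increment made since then.
```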
BorgBackup always does a full backup. But it divides all data into chunks (that's what Borg calls them), hashes those chunks, and stores them in a content-addressed storage layer. So it basically works like Git under the hood, plus encryption. If a chunk hasn't changed between backups, it's already there and doesn't have to be stored again. A backup is always a full index of the data.
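Here's a toy Python sketch of that content-addressed idea, just to illustrate, not Borg's real implementation (Borg uses content-defined chunking plus compression and encryption; `STORE_DIR` and `backup_file` are names I made up):

```python
import hashlib
import os

CHUNK_SIZE = 4 * 1024 * 1024  # fixed 4 MiB chunks, purely for the sketch
STORE_DIR = "chunk-store"     # hypothetical local content-addressed store

def backup_file(path: str) -> list[str]:
    """Store a file as chunks keyed by their hash; return the chunk index."""
    os.makedirs(STORE_DIR, exist_ok=True)
    index = []  # ordered chunk hashes = a full description of the file
    with open(path, "rb") as src:
        while chunk := src.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            chunk_path = os.path.join(STORE_DIR, digest)
            if not os.path.exists(chunk_path):  # dedup: skip known chunks
                with open(chunk_path, "wb") as out:
                    out.write(chunk)
            index.append(digest)
    return index
```

Every run still produces a full index of the data, but unchanged chunks are never written twice, which is why a "full" Borg backup after the first one is usually small and fast.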
With today's fast processors and hashing algorithms, a backup with Borg should be just as fast as an incremental backup with Duplicity. If you ask me, deduplicated backups are just plain superior.
Another tool that works like BorgBackup is Restic, which I prefer. Both are good choices that I would trust with my data.
I've tried to roll out Borg a few times over the years and always hit a roadblock for one reason or another. Perhaps it was the lack of any front end at all, with Borg just chilling in the background, but the documentation was never really clear on what the next steps were.
All the more knowledgeable people here are debating different modern backup solutions, and I was up here using tar until yesterday, all happy because Borg is a step up from that 😃
Just for anyone who wants to try Borg but would like a good UI: BorgBase made Vorta, which is a great UI. Plus, Borg 2.0 is around the corner.