SamSausagesB
I do this at the filesystem level, not the file level, using ZFS.
Unless the container has a database, I use ZFS snapshots. If it has a database, my script dumps the database first and then takes a ZFS snapshot. That snapshot is then sent via sanoid/syncoid to a disk in a separate backup pool.
This is a block-level backup, so it only transfers the data blocks that actually changed.
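The dump-then-snapshot flow could be sketched like this. The dataset layout, container name ("mydb"), and the Postgres dump command are illustrative assumptions, not details from the post:

```shell
#!/bin/sh
# Hypothetical dump-then-snapshot backup script.

DATASET="tank/containers/mydb"
SNAP="$DATASET@backup-$(date +%Y-%m-%d_%H%M)"

# DRY_RUN=1 (the default here) prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

# 1. Dump the database into the dataset, so the dump is captured by the snapshot.
run docker exec mydb pg_dump -U postgres -f /data/db.sql mydb

# 2. Take an atomic, block-level ZFS snapshot.
run zfs snapshot "$SNAP"

# 3. Replicate the snapshot to the backup pool (syncoid ships with sanoid).
run syncoid "$DATASET" "backuppool/containers/mydb"
```

Dumping into the dataset before snapshotting is what makes the database recoverable: the snapshot alone could catch the database files mid-write, but the dump inside it is always consistent.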
No, I like pfSense because it has less frequent updates and is better documented.
Here is one of the better guides, which helps you configure much of what you’re talking about:
https://nguvu.org/pfsense/pfsense-baseline-setup/
Plus, OPNsense gets much of its code from work done upstream in pfSense, and often has to wait for them to push it. Just look at what happened with TLS 1.3.
It’s not cheap to operate a business in Canada
Yes and no.
Yes if you have the resources to monitor and update. Companies have entire teams dedicated to this.
No if you don’t have the resources/time to keep up with it regularly.
IMO, no need to take this risk when you have services like Tailscale available today.
What has prompted your interest in data hoarding?
Censorship and Memory-holing
I can’t tell you how many channels have disappeared and been memory-holed. Especially since censorship went into overdrive around 2019.
Data hoarders can show you how the world was before all that happened.
Unraid and Proxmox
Self-hosted git repository.
I set up Gitea on my server and use it to track version changes of all my scripts.
And I use a combination of the wiki and .md (readme) files for how-tos and any inventory I’m keeping, like IP addresses, CPU assignments, etc.
But mainly it’s all in Markdown-formatted .md files.
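A minimal sketch of that workflow: scripts and an .md inventory note tracked in git, ready to push to a self-hosted Gitea instance. The hostname "gitea.lan" and repo path in the commented lines are made-up examples:

```shell
# Hypothetical setup: version scripts and notes locally, push to Gitea later.

repo=$(mktemp -d)          # stand-in for your scripts folder
cd "$repo"
git init -q
git config user.name "sam"
git config user.email "sam@example.com"

# A script plus a Markdown inventory note, like the ones described above.
printf '#!/bin/sh\necho "backup placeholder"\n' > backup.sh
printf '# Inventory\n\n| Host | IP |\n| --- | --- |\n| nas | 10.0.0.5 |\n' > README.md

git add backup.sh README.md
git commit -q -m "Track scripts and inventory notes"

# Once the empty repo exists in Gitea, wire it up and push:
# git remote add origin ssh://git@gitea.lan:2222/sam/scripts.git
# git push -u origin main
```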
Can be safer. Can be worse.
A poorly configured self-hosted Vaultwarden can be a major security issue.
A properly configured one is arguably safer than hosting with a 3rd party. LastPass taught me that one.
If you configure it so it’s not exposed to the web and is only accessed through a VPN like Tailscale, it can be quite robust.
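One way to get that isolation is to bind the container to the host's Tailscale address only, so it's reachable over the tailnet but not from the LAN or the internet. The address, port, and data path below are placeholder assumptions:

```shell
# Hypothetical Vaultwarden-behind-Tailscale launch command.

TS_IP="100.64.0.10"    # placeholder; on a real host: TS_IP=$(tailscale ip -4)

CMD="docker run -d --name vaultwarden \
  -v /srv/vaultwarden:/data \
  -p ${TS_IP}:8080:80 \
  vaultwarden/server:latest"

# PREVIEW=1 (the default here) just prints the command, since this sketch
# can't know your Docker/Tailscale setup.
PREVIEW=${PREVIEW:-1}
if [ -n "$PREVIEW" ]; then echo "$CMD"; else eval "$CMD"; fi
```

The key part is `-p ${TS_IP}:8080:80`: publishing the port on the Tailscale IP instead of `0.0.0.0` means Docker never exposes it on any other interface.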