Now that you mention fucking incompetence, I need to verify my 3-2-1 backup strategy is correctly implemented. Thanks for the reminder, CloudNordic and AzeroCloud!
What’s the point of primary and secondary backups if they can be accessed with the same credentials on the same network?
What’s the correct way to implement it so that it can still be automated? Credentials that can write new backups but not delete existing ones?
For an organisation hosting as many companies’ data as this one, I’d expect automated tape at a minimum. Of course, if the attacker had the time to start messing with the tapes, those would be lost as well, but that’s unlikely.
It depends on the pricing. For example, OVH didn’t keep any extra backups when their datacenter caught fire. But if a customer paid for backup, it was kept off-site and was recovered.
They might even be pretending to be a big hosting company when they’re actually renting a dozen dedicated servers from a big player; that’s much cheaper than maintaining a data center with 99.999% uptime.
I use immutable objects on Backblaze B2.
From the command line with their tool it’s something like b2 sync SOURCE b2://BUCKET,
and in the bucket settings you disable object deletion.
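For anyone wanting to copy that setup, a minimal sketch with the b2 CLI; the key ID, bucket name, and paths are placeholders, and command names vary a bit between CLI versions:

```
# one-time: authorize with an application key scoped to the backup bucket
b2 authorize-account <applicationKeyId> <applicationKey>

# upload new/changed files; replaced files are kept as older object versions
b2 sync /path/to/backups b2://my-backup-bucket/backups
```

With deletion disabled in the bucket settings, even the key doing the sync can’t purge what’s already there.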
BorgBase also allows this: backups can be created, but deletions/overwrites are not permanent (unless you enable them).
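If you self-host borg instead of using BorgBase, the rough equivalent (my assumption, not BorgBase’s actual setup) is borg’s server-side append-only mode, enforced in the SSH authorized_keys on the backup host:

```
# ~/.ssh/authorized_keys on the backup host: this client can push new
# archives, but deletes/prunes don't take effect while append-only is on
command="borg serve --append-only --restrict-to-repository /srv/borg/myrepo",restrict ssh-ed25519 AAAA... client@host
```

Because it’s set on the server side, malware holding the client’s credentials can’t switch it off.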
Fundamentally, there’s no need for the user/account that saves the backup somewhere to be able to read it, let alone change or delete it.
So ideally you have “write-only” credentials that can only append/add new files.
How exactly that is implemented depends on the tech. S3 and S3-compatible systems can often be configured so that data straight up can’t be deleted from a bucket at all.
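For S3 proper, a sketch with the AWS CLI; the bucket name is a placeholder, and outside us-east-1 create-bucket also needs a location constraint:

```
# Object Lock can only be enabled when the bucket is created
aws s3api create-bucket --bucket my-backup-bucket \
    --object-lock-enabled-for-bucket

# default retention: objects can't be deleted or overwritten for 30 days,
# in COMPLIANCE mode not even by the account that wrote them
aws s3api put-object-lock-configuration --bucket my-backup-bucket \
    --object-lock-configuration \
    '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'
```

Pair that with an IAM policy for the backup job that allows s3:PutObject on that bucket and nothing else, and you’ve got the “write-only” credentials from above.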
A tape library that uses a robot arm https://youtu.be/sYgnCWOVysY?t=30s
Backups that are not connected to any device are not susceptible to being overwritten and encrypted by malware.
Or like that vault in Rogue One?
I don’t know if it is the “correct” way, but I do it the other way around. I have a server and a backup server. The server user can’t even see the backup server, but it packs a backup; the backup server pulls the data with read-only access; then the main server deletes the local backup. Done.
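A minimal sketch of that pull arrangement, with made-up hostnames and paths; the backup-ro account’s SSH key only grants read access to the staging directory:

```
# main server (cron, step 1): pack a backup into a staging directory
tar -czf /srv/backup-staging/backup-$(date +%F).tar.gz /etc /var/www

# backup server (cron, step 2): pull the staged archive over SSH;
# the main server holds no credentials for the backup server at all
rsync -a backup-ro@mainserver:/srv/backup-staging/ /backups/mainserver/

# main server (cron, step 3): clear the staging area for the next run
rm -f /srv/backup-staging/backup-*.tar.gz
```

Even if the main server is fully compromised, the attacker only sees the staging copy; the backup history lives on a box they can’t log into.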
That’s what you call an epic blunder.
I think they’re aware of that
Martin Haslund Johansson, the director of AzeroCloud and CloudNordic, stated that he does not expect any customers to still be with them when the recovery is finally completed.
The customers are already lost:
- Pay the expensive ransom: if the bad actor hands over the decryption key, customers are relieved but still pissed, and will take their data and move somewhere else with a big FO. Go out of business.
- Don’t pay the ransom: customers are pissed and move somewhere else with a big FO. Go out of business.
Time and time again, data hosting providers are proving that local backups not connected to the internet are way better than storing in the cloud.
Any redundant backup strategy uses both. They both have inherent data-loss risks. Local backups are great, but unless you store them in a bunker they are still at risk from fire, theft, vandalism, and natural disasters. A good backup strategy stores copies in at least three locations: local, off-site, and the cloud. Off-site backups are backups you can physically retrieve, like tapes stored in a vault in another city.
The 3-2-1 backup strategy: “Three copies are made of the data to be protected, the copies are stored on two different types of storage media and one copy of the data is sent off site.”
How would that work in practice? One medium off-site and two media on-premises?
Other people’s computers. Never forget.