I see many posts asking about what other lemmings are hosting, but I’m curious about your backups.
I’m using duplicity myself, but I’m considering switching to borgbackup when 2.0 is stable. I’ve had some problems with duplicity. Mainly, the initial sync took incredibly long, and at one point a few directories got corrupted (they could no longer be decrypted by gpg).
I run a daily incremental backup and send the encrypted diffs to a cloud storage box. I also use SyncThing to share some files between my phone and other devices, so those get picked up by duplicity on those devices.
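A daily incremental duplicity run like that can be sketched roughly as follows. The paths, GPG key ID, and storage-box remote are all placeholders, not the poster's actual setup:

```shell
#!/bin/sh
# Sketch of a daily duplicity job: incremental by default, with a
# monthly full backup, pushing encrypted diffs to a cloud storage box.
# GPG_KEY, SRC and DEST are hypothetical - adjust for your setup.
GPG_KEY="0xDEADBEEF"
SRC="/home/user"
DEST="sftp://user@storagebox.example/backups"

daily_backup() {
    # duplicity runs incrementally once a full backup exists; force a
    # fresh full chain every month so one corrupted increment can't
    # poison the entire history.
    duplicity --encrypt-key "$GPG_KEY" --full-if-older-than 1M "$SRC" "$DEST"
    # prune old chains, keeping the last 3 full backups
    duplicity remove-all-but-n-full 3 --force "$DEST"
}

# run only when explicitly asked, e.g. from cron: backup.sh run
if [ "${1:-}" = "run" ]; then
    daily_backup
fi
```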
Restic, using resticprofile for scheduling and configuration. I do frequent backups to my NAS and have a second schedule that pushes to Backblaze B2.
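resticprofile is essentially a wrapper around plain restic commands, so the two schedules above boil down to something like this sketch (repo paths, the B2 bucket name, and the credentials file are made-up placeholders):

```shell
#!/bin/sh
# Roughly what a resticprofile setup with two schedules automates:
# frequent backups to a NAS repo, plus a second push to Backblaze B2.
# All paths, bucket names and credentials here are placeholders.
export RESTIC_PASSWORD_FILE="$HOME/.restic-password"

backup_to_nas() {
    restic -r /mnt/nas/restic-repo backup /home /etc
    # trim old snapshots on the frequent schedule
    restic -r /mnt/nas/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune
}

backup_to_b2() {
    # needs B2_ACCOUNT_ID / B2_ACCOUNT_KEY in the environment
    restic -r b2:my-backup-bucket:restic backup /home /etc
}

case "${1:-}" in
    nas) backup_to_nas ;;
    b2)  backup_to_b2 ;;
esac
```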
Another +1 for restic. To simplify the setup, however, I’m using https://autorestic.vercel.app/, triggered from systemd timers for automated backups.
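A systemd timer driving autorestic could look something like this pair of units (the unit names and binary path are assumptions, not taken from the post):

```ini
# /etc/systemd/system/autorestic.service
[Unit]
Description=autorestic backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/autorestic backup -a

# /etc/systemd/system/autorestic.timer
[Unit]
Description=Run autorestic daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enabled with `systemctl enable --now autorestic.timer`; `Persistent=true` makes systemd catch up on a run the machine slept through.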
3-2-1 with Restic and B2
What’s my what lmao?
I’m paying Google for their enterprise G Suite, which is still “unlimited”, and using rclone’s encrypted drive target to back up everything. I have a couple of scripts that make tarballs of each service’s files, and do a full backup daily.
It’s probably excessive, but nobody was ever mad about the fact they had too many backups if they needed them, so whatever.
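The tarball-then-rclone approach can be sketched like this. The remote name `gdrive-crypt:` and the directory layout are hypothetical, and an rclone `crypt` remote layered over the Drive remote must already be configured:

```shell
#!/bin/sh
# Sketch: tar up each service's data directory, then push everything
# to an rclone "crypt" remote backed by Google Drive.
# All names and paths below are placeholders.
SERVICES_DIR="/srv/services"
STAGING="/var/backups/staging"
REMOTE="gdrive-crypt:backups"

make_tarballs() {
    mkdir -p "$STAGING"
    for dir in "$SERVICES_DIR"/*/; do
        name=$(basename "$dir")
        tar -czf "$STAGING/$name-$(date +%F).tar.gz" -C "$SERVICES_DIR" "$name"
    done
}

upload() {
    # copy (not sync) so existing remote tarballs are never deleted
    rclone copy "$STAGING" "$REMOTE"
}

if [ "${1:-}" = "run" ]; then
    make_tarballs && upload
fi
```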
I use Syncthing to sync files between my phone, PC, and server.
The server runs Proxmox, with a Proxmox Backup Server in a VM. A Raspberry Pi pulls the backups to a USB SSD, and also rclones them to Backblaze.
Syncthing is nice. I don’t back up my PC, as that’s handled by the server. Reinstalling the PC requires almost no preparation; just set up Syncthing again.
Irreplaceable media: NAS -> Backblaze and NAS -> JBOD, via duplicacy for versioning.
Large ISOs that can be downloaded again: NAS -> JBOD and/or NAS -> offline disks.
Stuff that’s critical leaves the house, stuff that would just cost me a hell of a lot of personal time to rebuild just gets a copy or two.
I back up everything to my home server… then I run out of money and cross my fingers that it doesn’t fail.
Honestly though, my important data is backed up in a couple of places, including a cloud service. 90% of my data is replaceable, so the 10% is easy to keep safe.
I run all of my services in containers, and intentionally leave my Docker host as barebones as possible so that it’s disposable (I don’t back up anything aside from data to do with the services themselves; the host could be launched into the sun without any backups and it wouldn’t matter). I like to keep things simple yet practical, so I just run a nightly cron job that spins down all my stacks, creates archives of everything as-is at that time, and uploads them to Wasabi, AWS S3, and Backblaze B2. Then everything just spins back up, rinse and repeat the next night. I use lifecycle policies to keep the last 90 days’ worth of backups.
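That nightly job could be sketched roughly like this. The stack directory, archive path, and rclone remote names are placeholders, and this assumes docker compose stacks and preconfigured rclone remotes for all three providers:

```shell
#!/bin/sh
# Sketch of the nightly job described above: spin everything down,
# archive it as-is, upload to three providers, spin it back up.
# Paths and remote names are placeholders.
STACKS_DIR="/opt/stacks"
ARCHIVE_DIR="/var/backups/stacks"

nightly_backup() {
    mkdir -p "$ARCHIVE_DIR"
    # spin everything down so the archives are consistent
    for stack in "$STACKS_DIR"/*/; do
        ( cd "$stack" && docker compose down )
    done
    tar -czf "$ARCHIVE_DIR/stacks-$(date +%F).tar.gz" -C / "${STACKS_DIR#/}"
    # upload to all three providers; bucket lifecycle rules handle
    # the 90-day retention, so nothing is deleted here
    for remote in wasabi:backups s3:backups b2:backups; do
        rclone copy "$ARCHIVE_DIR" "$remote"
    done
    for stack in "$STACKS_DIR"/*/; do
        ( cd "$stack" && docker compose up -d )
    done
}

if [ "${1:-}" = "run" ]; then
    nightly_backup
fi
```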
I like the cut of your jib!
Any details on the scripts?
All devices back up to my NAS either in real time or at short intervals throughout the day. I use recycle bins for easy restores of accidentally deleted files.
My NAS is set up on a RAID for drive redundancy (Synology RAID) and does regular backups to the cloud for active files.
Once a day I do a Hyper Backup to an external HDD.
Once a month I backup to an external drive that lives offsite.
Backups to these external HDDs have versioning, so I can restore files from multiple months ago, if needed.
The biggest challenge is that as my NAS grows, it costs significantly more to expand my backup space. Cloud storage and new external drives aren’t cheap. If I had an easy way to keep a separate NAS offsite, that would considerably reduce ongoing costs.
Depending on how much storage you need (>30 TB?), it may be cheaper to use a colocation service for a server as an offsite backup instead of cloud storage. It’s not as safe, but it can be quite a bit cheaper, especially if for some reason you’re forced to rapidly download a lot of your data from the cloud backup. (Backblaze B2 charges $0.01/GB downloaded.)
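Using the figures above ($0.01/GB egress, roughly $99/month for colocation), the restore-cost math is easy to sketch:

```shell
#!/bin/sh
# Back-of-the-envelope restore cost at $0.01/GB egress (the figure
# quoted above): downloading 30 TB once costs about $300, i.e.
# roughly three months of a $99/month colocation fee.
egress_cost() {
    # $1 = terabytes to download, $2 = price per GB
    awk -v tb="$1" -v per_gb="$2" 'BEGIN { printf "%.0f\n", tb * 1000 * per_gb }'
}

egress_cost 30 0.01   # -> 300
egress_cost 7 0.01    # -> 70
```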
Do you have an example or website I could look at for this ‘colocation service’?
Currently using IDrive as the cloud provider, which is free until the end of the year, but I’m not locked into their service. Cloud backups really only see more active files (<7 TB), and the unchanging stuff like my movie or music catalogue seems reasonably safe on offsite HDD backups, so I don’t have to pay just to keep those somewhere else.
First I’d like to apologize because I originally wrote less than 30TB instead of more than 30TB, I’ve changed that in the post.
A colocation service is a data center where you pay a monthly fee and they’ll house your server (electricity and internet bandwidth are usually included, albeit with certain limits; if you need more you can always pay extra).
Here’s an example. It’s usually around $99/99€ per 1U server. If you live in or near a big city, there’s probably at least one data center that offers colocation services.
But as I said, it’s only worth it if you need a lot of storage or if you move files around a lot, because bandwidth charges when using object storage tend to be quite high.
For <7 TB it isn’t worth it, but maybe in the future.
Thanks for the info. Something to consider as my needs grow 👍
I have an external hard drive that I keep in the car. I bring it in once a month and sync it with the server. The data partition is encrypted so that even if it were to get stolen, the data itself is safe.
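A monthly sync to an encrypted drive like that might look like the following sketch, assuming LUKS encryption; the device path, mapper name, and source directory are placeholders:

```shell
#!/bin/sh
# Sketch: unlock a LUKS-encrypted external drive, rsync the server's
# data onto it, then lock it again before it goes back to the car.
# Device and paths are placeholders.
DEVICE="/dev/sdb1"
MAPPER="offsite"
MOUNTPOINT="/mnt/offsite"
SOURCE="/srv/data/"

monthly_sync() {
    cryptsetup open "$DEVICE" "$MAPPER"      # prompts for the passphrase
    mount "/dev/mapper/$MAPPER" "$MOUNTPOINT"
    # --delete mirrors the server exactly; drop it to keep deleted files
    rsync -a --delete "$SOURCE" "$MOUNTPOINT/"
    umount "$MOUNTPOINT"
    cryptsetup close "$MAPPER"
}

if [ "${1:-}" = "run" ]; then
    monthly_sync
fi
```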
I have a similar 3-2-1 strategy without using someone else’s server or needing to traverse the internet. I keep my drive in the pool shed, since if my house were to blow up or get robbed, the shed would probably be fine.