Part of any good backup strategy is to ensure a copy of your backup is stored in a secondary location, so that if there is a major outage (datacenter failure, office burns down, whatever) there is a copy of your data stored elsewhere. After all, what use is a backup if it gets destroyed at the same time as the original?
A large enterprise may do cross-datacenter backups, or stream them to a “bunker”; smaller businesses may physically transfer media to a storage location (in my first job, mumble years ago, the finance director would take the weekly full-backup tapes to her house so we had at most 1 week of data loss).
With the internet now being so fast (my home connection is faster than my LAN was 10 years ago!) offsite storage in the cloud is now a feasible option, and a number of businesses have started up around this concept, allowing anyone, even home users, to have an offsite backup (e.g. Crashplan, Carbonite, Backblaze). Many of these solutions install an agent on your machine which detects changes live and uploads them to the cloud storage, which is nice.
These services are priced in a number of ways (e.g. fixed price, or per terabyte). They may even allow you to “bring your own storage” (e.g. back up to an Amazon S3 pool you control). You should also be aware of network transfer charges.
The DIY solution
I’m an old-school Unix guy and do old-school backup solutions. So each server has a nightly job that does a dump (level 0 on Sunday, level 1 on Monday, etc). For my remote servers I rsync the data back to my home machine. I have an LVM on here that stores all the backups for the past 2 weeks. This works fine for data restores, but would be vulnerable to my house burning down (for example); backups of my home machine and the original data are in the same location.
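As a rough sketch of what that nightly job looks like (the filesystem, paths, dump flags and remote hostname here are illustrative assumptions, not my exact script):

#!/bin/sh
# Nightly dump: level 0 on Sunday, level 1 on Monday, ... level 6 on Saturday.
LEVEL=$(date +%w)                      # 0 = Sunday
DEST=/BACKUP/$(date +%Y%m%d).root.$LEVEL.gz
dump -${LEVEL}u -f - / | gzip > "$DEST"
# Remote servers then push the dump area back to the home machine:
rsync -a /BACKUP/ backuphost:/BACKUP/$(hostname)/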
Until recently I had a physical server colocated in Canada and would copy my files to that machine. However I cancelled this service (it was just too expensive and I didn’t need a full server; my existing virtual machines did the job I needed) and started looking for an alternative way of handling offsite files.
I had spotted that Amazon now have a Cloud Drive. What makes this service attractive is that the $60/year option is unlimited. I’ve read that people have stored over 100Tbytes of data here. Options such as Google Drive charge $20/100Gb/Yr (or $100/Tb/yr), so if you store over 300Gb of data then Amazon is cheaper. Similarly Dropbox is $100/Tb/yr. This makes Amazon nice from a pricing perspective, but it’s not necessarily as well supported as other options (e.g. Backblaze have dedicated customer support for their backup/restore solutions and can even ship you a drive with your data on… for a fee, of course!).
Now Amazon Cloud Drive (ACD) has agents for Windows and Mac but, of course, not for Linux or Unix in general. Fortunately people have coded to the ACD API, and provided solutions such as acd_cli or rclone.
rclone looked attractive to me; it can act like rsync but for Amazon (and other) cloud drives. In addition it can also encrypt the data sent to the cloud which, to me, is pretty important; I don’t want to become a statistic like Capgemini and expose my backups to the world!
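Getting rclone onto a Linux box is just a matter of unpacking the prebuilt static binary; something like the following (the “current” download URL and install path are assumptions, so check rclone.org for the real details):

curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
unzip rclone-current-linux-amd64.zip
cp rclone-*-linux-amd64/rclone $HOME/bin/
chmod 755 $HOME/bin/rclone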
rclone configuration
Once I’d signed up for ACD (oh, hey, 3 month free trial so if this doesn’t work out then I can cancel it!) it was pretty simple to configure rclone to talk to Amazon. The rclone config command steps you through the process (even on headless servers) and builds the config file:
[Amazon]
type = amazon cloud drive
client_id =
client_secret =
token = {"access_token":"***","token_type":"bearer","refresh_token":"***", "expiry":"***"}
And that’s all there is to it. So now we can work directly from the command line:
$ rclone mkdir Amazon:foo
$ rclone copy ~/.profile Amazon:foo
2017/03/12 16:08:25 amazon drive root 'foo': Waiting for checks to finish
2017/03/12 16:08:25 amazon drive root 'foo': Waiting for transfers to finish
2017/03/12 16:08:26
Transferred: 1.714 kBytes (1.715 kBytes/s)
Errors: 0
Checks: 0
Transferred: 1
Elapsed time: 900ms
$ rclone ls Amazon:foo
1755 .profile
$ rclone delete Amazon:foo/.profile
2017/03/12 16:08:41 Waiting for deletions to finish
$ rclone rmdir Amazon:foo
This is pretty easy! And, of course, the files are visible from a web browser.
Encryption
So far this data isn’t encrypted, so now we need to create an encrypted area. The nice part of this is that you pick a subdirectory and specify that for encryption. Again rclone config can build this for you; the resulting config section looks like this:
[backups]
type = crypt
remote = Amazon:Backups
filename_encryption = off
password = **THATWOULDBETELLING*
password2 = **THATWOULDBETELLING*
The remote line tells rclone to use the Backups directory on the previously created Amazon setup.
An interesting line is filename_encryption. This can be off or standard.
In off mode the filename appears unchanged (well, .bin gets added to the end). In standard mode the filename also gets encrypted. For my backups I didn’t care if people saw linode/20170312.root.0.gz.bin as the filename, and it makes it easy to see what is there via the web.
I can now do rclone copy /BACKUP backups: and the program will recursively copy all my data up.
My home connection is a FIOS 75Mbit/s link. Uploading averaged around 60Mbit/s, so it took a while to upload 320Gb of data, but I left it overnight and it worked.
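If you don’t want the upload saturating your link while you’re using it, rclone can throttle itself with its --bwlimit option; for example (the 4M figure is just an illustration):

rclone copy --bwlimit 4M /BACKUP backups: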
So now let’s see how this looks:
$ rclone ls Amazon:Backups/linode/linode | sort -k+2 | head -3
6358 20170223._datadisk.4.gz.bin
80846055 20170223._news.4.gz.bin
4214 20170223.log.bin
$ rclone ls backups:linode/linode | sort -k+2 | head -3
6310 20170223._datadisk.4.gz
80826279 20170223._news.4.gz
4166 20170223.log
We can see the filesizes are larger when looking at the raw data; that’s part of the encryption overhead.
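The overhead is consistent with rclone’s documented crypt format: a 32-byte file header plus a 16-byte authenticator for every 64 KiB block of plaintext. For the 80826279-byte file, for example:

$ echo $(( 32 + 16 * ( (80826279 + 65535) / 65536 ) ))   # expected overhead
19776
$ echo $(( 80846055 - 80826279 ))                        # observed overhead
19776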
Verify
Of course a backup isn’t any good if you can’t restore from it, so I also tried downloading all the data. I created another temporary LVM volume and then restored all the data back:
$ rclone copy backups: .
[...]
Transferred: 341.269 GBytes (9.703 MBytes/s)
Errors: 0
Checks: 0
Transferred: 706
Elapsed time: 10h0m17.2s
That looks good!
$ diff -r . /BACKUP/
Only in /BACKUP/: lost+found
Only in /BACKUP/penfold: 20170226._FastData.0.gz
That’s interesting… we learn two things:
- Empty directories are not copied over
- Amazon has a maximum file size limit; currently this is 50Gb. Any file larger than this won’t be copied.
The second limit meant I modified my backup software slightly; it now splits the data files every 10Gb to ensure they stay well under the cap.
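A minimal sketch of that change, assuming the dump is piped through gzip (the filesystem and filenames here are illustrative, not my actual script):

# Split the compressed dump into 10Gb pieces so each piece stays under
# ACD's 50Gb limit; restore with: cat <prefix>.* | gunzip | restore -rf -
dump -0u -f - /news | gzip | split -b 10G - /BACKUP/linode/20170312._news.0.gz.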
But other than that, the data I got back was identical to the data I sent up.
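For a lighter-weight sanity check that doesn’t need all that temporary space, rclone also has a check subcommand which compares source and destination in place; note that against a crypt remote it can really only compare file sizes (the remote hashes are of the encrypted data), so treat it as a quick verification rather than proof:

$ rclone check /BACKUP backups: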
Worst case scenario
When I wrote, earlier, about backups I mentioned the need to be aware of what software you need to perform a recovery.
In this case I need two files: the rclone binary and the $HOME/.rclone.conf configuration. The binary I can store in the Amazon drive as well; if I need it then I can download it via a web browser.
More critical is the configuration file; this contains the encryption strings for my data, and so I don’t want it easily visible. So this file I stored in my password manager (lastpass).
So now all I need to bootstrap my recovery process is a web browser and knowledge of the lastpass password.
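So a bare-metal recovery boils down to a handful of steps; roughly this (the /restore path is just a placeholder):

# 1. Download the rclone binary via a web browser (it's stored in the drive too).
# 2. Recreate $HOME/.rclone.conf from the copy kept in lastpass.
# 3. Pull everything back down:
rclone --config $HOME/.rclone.conf copy backups: /restore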
Simplified restore
In the above tests I copied all the files back before comparing. However, rclone also has experimental FUSE support for mounting remotes; it works best for reading. So now I could do:
$ sudo rclone mount backups: /mnt/Amazon &
[1] 29086
$ sudo diff /BACKUP/linode/linode/20170312.log /mnt/Amazon/linode/linode
$ sudo /bin/fusermount -u /mnt/Amazon
[1] + Done sudo rclone mount backups: /mnt/Amazon &
$
Direct access to the files without a lot of temporary storage needed! Which might be important after a disaster, if I’m short of disk space :-)
Behaviour change
Media storage
Unlimited storage has a few other benefits as well. I have ripped all my CDs to FLAC format. That’s another 300Gb or so. If my system died I’d be annoyed to have to re-rip all the CDs (which may not be available after a fire). But with unlimited storage…
In this case I don’t want Amazon to see the filenames, just in case they have some automated scanning tool looking for potentially pirated content (I do own these CDs, but would an automation tool know that?).
The config entry for this looks like:
[flac]
type = crypt
remote = Amazon:My_CDs
filename_encryption = standard
password = *IWONTTELL*
password2 = *IWONTTELL*
Now I can rclone sync my FLAC files up. Encrypted filenames look a little odd:
$ rclone ls Amazon:My_CDs | head -3
30038497 c87a5pke85pd9ev7gp52aftecj8eokp19p2gn900pkkdb9ngr4n0/042hle8jn609itgohdh0q4gt6q7b3smbndn1cfjkklnsm2qm5l06u7au2ufme7jgne1oc8c7b68j4oarc9ebgks6faufrdeel3qv5j0
32853891 c87a5pke85pd9ev7gp52aftecj8eokp19p2gn900pkkdb9ngr4n0/p705nk3k7j9ns64ki3bf7go7c8jq9q439j2avlcu6ipbpi4qvjmg
23424207 c87a5pke85pd9ev7gp52aftecj8eokp19p2gn900pkkdb9ngr4n0/bf8e4mj9d7184f3021mdhbgua876d40stfa18ao5kfjl2sinaie805qc12r86f1jc32flct9ova8i
Again I verified the data I uploaded looked correct.
Amazon now claims I’m using 650Gb of storage.
I’m now wondering whether I should upload my ripped DVDs as well. I don’t see why not…
Unlimited backups
Previously my backups retained only 14 days of history. The overnight process would delete files over that age, to keep disk usage in check.
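That local pruning is nothing clever; something along these lines (a sketch, not my exact job):

# Delete local backup files older than 14 days to keep the LVM volume usage in check.
find /BACKUP -type f -mtime +14 -delete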
But with unlimited storage I could start keeping offsite backups for months, if not years.
This also helps with any possible ransomware issue: if something encrypted my local files and corrupted the recent backups while leaving the sync process running, the last 14 days of offsite backups might be damaged, but older offsite files no longer exist locally and so won’t be touched. Annoying, but not critical.
Automation
With the backups and music now available, I can automate the offsite copying as part of the overnight job:
#!/bin/ksh -p

# rc <directory> <copy|sync> <remote name from .rclone.conf>
rc()
{
  echo
  cd "$1" || exit
  /home/sweh/bin/rclone --stats=0 --max-size=40000M --config /home/sweh/.rclone.conf "$2" . "$3":
}
rc /BACKUP copy backups
rc /Media/audio sync flac
Note that the backup uses copy (so new files get copied up, old files get left) but the music uses sync; if I sell or otherwise get rid of a CD then I delete the flac files and the offsite copy will also get deleted.
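The script runs from cron as part of the overnight processing; a hypothetical crontab entry (the 05:00 start time and script name are assumptions) might be:

# m h dom mon dow  command
0 5 * * * /home/sweh/bin/offsite-rclone >> /home/sweh/logs/offsite-rclone.log 2>&1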
Today was Sunday, so a level 0 backup. The output from the job reports that it took just over 4 hours to upload all the backups:
2017/03/12 12:46:51 Encrypted amazon drive root 'Backups': Waiting for checks to finish
2017/03/12 12:46:51 Encrypted amazon drive root 'Backups': Waiting for transfers to finish
2017/03/12 13:56:15
Transferred: 117.701 GBytes (7.795 MBytes/s)
Errors: 0
Checks: 650
Transferred: 48
Elapsed time: 4h17m41.2s
The daily incremental typically takes under 7 minutes.
Limitations
rclone and ACD have some limitations. These are what I’ve hit so far:
- No files larger than 50Gb
- Empty directories are not copied
- Symbolic links aren’t copied (the latest DEV version has an option to follow links, but can’t copy the link itself)
- Long filenames (e.g. over 160 characters) break if using filename encryption
Conclusion
Cloud storage for offsite backups is now a viable technology… if you have a fast enough internet connection. Beware of hidden costs in your tool of choice, and make sure you can recover from a complete failure of your hardware.
In my case rclone and ACD make a good scriptable solution.
There is now no reason for home users not to have offsite backups, just like large enterprises!