duplicity backup - remote backup

Intro

In the first part of the duplicity series, we saw how to create GPG keys, create encrypted and signed backups locally and restore them.
Now, we are going to move the data off the server, giving us an offsite backup.
We will also schedule the backup with cron so that it runs nightly.

I like to run the commands manually before scripting anything, so I fully understand what is happening and can discover problems before the script starts failing every night. Therefore, the script implementation will be the last step.

Note: As in the first part, this tutorial will use root on the local system. On the remote system, the user "matt" will be used.

sftp

First, let's make sure we can connect to the backup-server:

# ssh matt@backupserver [-p port]

If you have not yet created the SSH keys (we will do this in the next step), this should ask for the password of the remote user.

Next - let's try to run a full backup.

# duplicity --encrypt-key [encrypt-key] --sign-key [sign-key] --ssh-askpass [folder-to-backup] sftp://[user]@[backupserver]:[port]//[folderonremotehost]

duplicity --encrypt-key 3E988E6866B39EE1 --sign-key E24E7891636093DB --ssh-askpass /tmp/backupTest/ sftp://matt@backupserver:1234//home/matt/offsitebackup 

Please note that the port can be omitted if the default SSH port (22) is used. This will again ask for the remote user's password, as well as the passphrase for the sign key, so we need to be careful about which password we enter at which prompt.

Restore and verify operations follow the same logic as explained in the first part; simply exchange the file://[targetfolder] target URL for sftp://[user]@[backupserver]:[port]//[folderonremotehost].
To check that the backup was indeed made, SSH into the backup server and inspect the backup folder (ls -l /home/matt/offsitebackup).
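For example, restoring and verifying against the sftp target would look like this (a sketch using the sample key IDs, host, port, and paths from this series - adjust them to your own setup; with no time option, duplicity restores the latest backup):

```shell
# Restore the newest backup from the remote target into /tmp/restoreTest
# (you will be asked for the decryption passphrase and the SSH password).
duplicity restore --ssh-askpass \
  sftp://matt@backupserver:1234//home/matt/offsitebackup /tmp/restoreTest

# Verify: compare the remote backup against the local source folder.
duplicity verify --ssh-askpass \
  sftp://matt@backupserver:1234//home/matt/offsitebackup /tmp/backupTest/
```

Both commands take the backend URL first and the local folder second, the same order as in part 1 with the file:// target.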

That was easy - now let's wrap it into a little script which can run automatically.

SSH Keys

For the following scripts to work, we will need key-based SSH login, so no interactive password prompt gets in the way.
This is as easy as running the following command and accepting all defaults.
As this key allows anyone in possession of it to log in to the remote server, we will protect it with the passphrase SSHTest1!. As always, this is a sample password with the intent to be easy and clear for the reader; in reality I use a much stronger one.

ssh-keygen -b 4096 -t rsa

After the key is generated, we copy the key to the target server with the following command.

# ssh-copy-id [user]@[targethost] -p [port]
ssh-copy-id matt@backupserver -p 1234

Now, we test access using the key:

# ssh [user]@[targethost] -p [port]
ssh matt@backupserver -p 1234

If everything is working correctly, you should see the welcome message from the backup server.

First script (selfmade)

We will now put what we learned into a very simple script.
In the next part, we will switch to a better version of the script - but for now we do it "all by ourselves".
Open your favorite editor and paste the following script to a new file called backupscript.sh.
Please use your own secure password and the key IDs you generated in part 1.

#!/bin/bash

# password for the SSH key
export FTP_PASSWORD="SSHTest1!"
# password to the encrypt key - set to empty so duplicity is not asking
export PASSPHRASE=""
# password to the Sign key
export SIGN_PASSPHRASE="TestSig1!"

ENCRYPT_KEY=3E988E6866B39EE1
SIGN_KEY=E24E7891636093DB
# Local source folder
SOURCE="/tmp/backupTest/"
# (remote) target folder
TARGET="sftp://matt@backupserver:1234//home/matt/offsitebackup"

duplicity --full-if-older-than 5D --encrypt-key "${ENCRYPT_KEY}" --sign-key "${SIGN_KEY}" "${SOURCE}" "${TARGET}"

Make the file executable by running chmod +x backupscript.sh.
To keep the passwords to ourselves (the root user), we also restrict the permissions so that other users have no access at all: chmod 700 backupscript.sh.
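To double-check the result, here is a quick sketch (mode 700 means the owner has full access and group/other have none; stat -c is the GNU coreutils syntax):

```shell
# Create a placeholder script and lock it down to the owner only.
touch backupscript.sh
chmod 700 backupscript.sh

# Print the octal mode; 700 = rwx for the owner, nothing for group/other.
stat -c '%a' backupscript.sh   # prints: 700
```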

In the script, I also added the option --full-if-older-than 5D, which we did not use before; it makes duplicity perform a full backup whenever the last full backup is older than 5 days.
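To see when the last full backup was taken (and therefore whether --full-if-older-than will trigger a new one), duplicity can list the stored backup chain. A sketch with the sample target and passwords from above:

```shell
# Supply the SSH password non-interactively, as in the script.
export FTP_PASSWORD="SSHTest1!"

# List the full and incremental backup sets stored at the remote target.
duplicity collection-status sftp://matt@backupserver:1234//home/matt/offsitebackup
```

The output shows the chain of full and incremental sets with their dates, which is also handy when deciding how far back a restore can go.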

Let's run the script:

./backupscript.sh
output:
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sat Oct 21 10:54:06 2017
--------------[ Backup Statistics ]--------------
StartTime 1508576169.90 (Sat Oct 21 10:56:09 2017)
EndTime 1508576169.96 (Sat Oct 21 10:56:09 2017)
ElapsedTime 0.06 (0.06 seconds)
SourceFiles 61
SourceFileSize 45086 (44.0 KB)
NewFiles 0
NewFileSize 0 (0 bytes)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 0
RawDeltaSize 0 (0 bytes)
TotalDestinationSizeChange 732 (732 bytes)
Errors 0
-------------------------------------------------

As we can see, the backup completed successfully.
We can try a restore (documentation on that is in part 1).

Schedule

It is important to run backups regularly - therefore we will now use "crontab" to schedule the backup process.
There are two ways to schedule tasks with cron: per user via crontab -e, or in the file /etc/crontab.
I will be using /etc/crontab for this tutorial. This means only the superuser (root) can change the schedule (or disable backups); however, we can still define which user the backup should run as.
As the initial explanation in the file says:

Unlike any other crontab you don't have to run the `crontab' command to install the new version when you edit this file and files in /etc/cron.d. These files also have username fields, that none of the other crontabs do.

Let's open the crontab file (we need to be root for this, so prefix the command with sudo) and add the following line at the end:

# m h dom mon dow user	command
# 0 1 * * * [user] [full_path_to_backup_script] >> [path_to_log_file] 2>&1

0 1 * * * root /root/backupscript.sh >> /var/log/backupscript.log 2>&1

By using 0 1 * * *, we run the backup script daily at 1 AM. We also specify /var/log/backupscript.log as the log file. We can now wait overnight and then check the log file; its content will be similar to the output above.
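Instead of reading the whole log the next morning, we can just check for the "Errors 0" line from the statistics block. A minimal sketch (it writes a sample log line so the check can be demonstrated without waiting for cron; on a real system, point LOG at /var/log/backupscript.log):

```shell
# Stand-in log file; replace with /var/log/backupscript.log on a real system.
LOG=/tmp/backupscript.log
printf 'Errors 0\n' > "$LOG"   # simulates last night's duplicity output

# The statistics block contains an error counter; 0 means a clean run.
if grep -q '^Errors 0$' "$LOG"; then
  echo "backup OK"
else
  echo "backup reported errors" >&2
fi
```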

Conclusion

This was fairly simple - however, we haven't reached our final goal yet. Remember, we wanted to have 3 copies of our data, in at least 2 different locations, with at least 1 backup offsite. Until now, we either have a backup on the same server (which will not help much in case of a server crash) - or a backup on a remote server (which may be inconvenient if we accidentally delete the wrong file).

In the next part, we will switch from our home-made quick-and-dirty script to an open-source duplicity-backup script which gives us more options out of the box without the need to reinvent the wheel.
Also, we will have a look at the "multi" duplicity plugin, which allows us to specify multiple targets for the same backup.

Matthias
