duplicity - multi-site backup

This is one post in a series of tutorials on Duplicity. If you are just getting started with duplicity, I recommend heading over to the first and second part for a basic overview of duplicity and to get GPG keys set up properly (this is not covered in this post, but is required for the tutorial to work).
You should also have a look at the duplicity overview post, which contains a list of all posts in this series.
The basic configuration from that post will be used as the basis for configuring duplicity-backup.sh in this post.

Multi

In this part of the duplicity series, we will combine several backup services into the same duplicity backup job by using the duplicity multi backend. When set up correctly, duplicity will fulfill the 3-2-1 backup strategy all by itself, doing one backup on the local system and one (or more!) to remote/cloud systems.

This backend allows duplicity to use multiple storage backends at the same time, either by pooling their storage (mode=stripe) or by mirroring the same backup to all configured backends (mode=mirror). I will focus on the mirror setup in this post, as it makes the most sense from a security/backup perspective. The stripe setup is only useful when there is more data to back up than any single cloud service can hold (and it increases the risk of a corrupted backup, since every backend must stay intact).
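The mode is selected via a query parameter on the multi URL (key options omitted for brevity; the config file lists the actual backends, as we will see in a moment):

```shell
# mirror: every volume is written to all backends in the config file
duplicity /home/matt/test/ "multi:///home/matt/multiconfig.conf?mode=mirror"

# stripe: volumes are distributed across the backends to pool their space
duplicity /home/matt/test/ "multi:///home/matt/multiconfig.conf?mode=stripe"
```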

basic configuration

To get started, we use a very basic JSON-based configuration:
Filename: /home/matt/multiconfig.conf

[
 {
  "description": "Local disk test",
  "url": "file:///home/matt/backupmulti"
 }
]

This setup is almost identical to the one from the initial post, where we ran the backup locally.

Now, let's run this once:

duplicity --encrypt-key 3E988E6866B39EE1 --sign-key E24E7891636093DB --ssh-askpass /home/matt/test/ "multi:///home/matt/multiconfig.conf"

When prompted, enter your passphrases.
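If you plan to run this unattended (from cron, for example), the interactive prompts get in the way. Duplicity can also read the passphrases from environment variables; a minimal sketch, assuming the same keys as above (keep anything holding these values readable only by you):

```shell
# passphrase for the encryption key
export PASSPHRASE='my-encryption-passphrase'
# separate passphrase for the signing key
export SIGN_PASSPHRASE='my-signing-passphrase'

duplicity --encrypt-key 3E988E6866B39EE1 --sign-key E24E7891636093DB \
  /home/matt/test/ "multi:///home/matt/multiconfig.conf"
```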

The output should look similar to mine:

duplicity --encrypt-key 3E988E6866B39EE1 --sign-key E24E7891636093DB --ssh-askpass /home/matt/test/ "multi:///home/matt/multiconfig.conf"
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
GnuPG passphrase for decryption:
GnuPG passphrase for signing key:
No signatures found, switching to full backup.
--------------[ Backup Statistics ]--------------
StartTime 1518857776.80 (Sat Feb 17 09:56:16 2018)
EndTime 1518857777.17 (Sat Feb 17 09:56:17 2018)
ElapsedTime 0.37 (0.37 seconds)
SourceFiles 61
SourceFileSize 45086 (44.0 KB)
NewFiles 61
NewFileSize 45086 (44.0 KB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 61
RawDeltaSize 30 (30 bytes)
TotalDestinationSizeChange 1539 (1.50 KB)
Errors 0
-------------------------------------------------

Checking the folder we configured in the JSON file above (/home/matt/backupmulti in my case), we can see that the backup was written there.
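To double-check what duplicity itself sees at the destination, you can also query the backup chain; collection-status works against the multi URL just like against any other backend:

```shell
duplicity collection-status "multi:///home/matt/multiconfig.conf"
```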

Before moving on to configuring a second backup site, delete the local backup (duplicity does not synchronize files across multiple backup sites, so we either need to copy the existing backups to the remote backup location, or start fresh).

As we are just getting started, we can simply delete everything and back up again. If you already have a backup history, consider copying the existing files to all configured locations instead.

rm /home/matt/backupmulti/*
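If you would rather keep your existing history, seeding the remote location before the first multi run looks roughly like this (host, port, and paths taken from the sftp configuration used below; adjust to your setup):

```shell
# copy the existing backup chain to the remote site so both mirrors
# start from the same state, instead of deleting and starting fresh
scp -P 1234 /home/matt/backupmulti/* matt@backupserver:/home/matt/offsitebackup_multi/
```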

Now, let's extend the JSON with a remote site. I use the configuration from the initial remote backup post.

[
 {
  "description": "Local disk test",
  "url": "file:///home/matt/backupmulti"
 },
 {
   "description": "Backup to sftp site",
   "url": "sftp://matt@backupserver:1234//home/matt/offsitebackup_multi"
 }
]

We simply added a second entry to the configuration file, backing up to a remote site. Let's run this again.

duplicity --encrypt-key 3E988E6866B39EE1 --sign-key E24E7891636093DB --ssh-askpass /home/matt/test/ "multi:///home/matt/multiconfig.conf?mode=mirror&onfail=abort"

Notice that this time I added ?mode=mirror&onfail=abort to the end of the URL (the quoting is now necessary, as the shell would otherwise interpret the &!). onfail=abort tells duplicity to fail should one of the backends not work correctly. Combine this with one of the notification posts and we will be notified whenever the backup fails.
Now, let's check both our locally configured folder and the remote folder. As I deleted the backup from the first try, the folders are now identical.
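A quick way to confirm that both mirrors hold the same volumes is to compare the file listings (ssh host and port as in the sftp URL above; adjust to your setup):

```shell
# identical mirrors produce identical listings, so diff prints nothing
diff <(ls /home/matt/backupmulti | sort) \
     <(ssh -p 1234 matt@backupserver 'ls /home/matt/offsitebackup_multi' | sort)
```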

duplicity-backup.sh

Next, let's build this into our duplicity-backup script (basic configuration here).

Building on top of this configuration, we now simply change the DEST variable to the configuration used in the above command.

DEST='"multi:///home/matt/multiconfig.conf?mode=mirror&onfail=abort"'

Please notice the odd quoting: as this file is sourced by the backup script, we need to wrap the value we want to pass to duplicity in single quotes, so that the double-quoted string is passed on to duplicity intact (with the quotes).
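To see what this quoting does, here is a tiny sketch of the mechanism: duplicity-backup.sh assembles its duplicity command line and evaluates it with the shell, so the inner double quotes survive until that evaluation (echo stands in for the real duplicity call here):

```shell
# duplicity-backup.conf is sourced by the script, so DEST's value
# carries the inner double quotes along with it
DEST='"multi:///home/matt/multiconfig.conf?mode=mirror&onfail=abort"'

# the script builds its command and evaluates it, roughly:
#   eval duplicity ... $DEST
# eval re-parses the expanded line, so the inner quotes keep the
# shell from treating '&' as the background operator
eval echo $DEST
# prints: multi:///home/matt/multiconfig.conf?mode=mirror&onfail=abort
```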
Let's run duplicity-backup.sh and see if our configuration does indeed work:

./duplicity-backup.sh --config duplicity-backup.conf --full

Again, check both the local folder and the remote sftp folder. You should see another full backup there, in addition to the one we had before.

We can also set this up for other backends such as Dropbox; we simply pass in the DPBX_ACCESS_TOKEN environment variable we had to configure in the Dropbox post.

[
 {
  "description": "Local disk test",
  "url": "file:///home/matt/backupmulti"
 },
 {
   "description": "Backup to dropbox",
   "url": "dpbx:///duplicitytest1",
   "env": [
     {
       "name": "DPBX_ACCESS_TOKEN",
       "value": "teBo-PD8bsUAA<redacted>"
     }
     ]
 }
]

Note that this works for other backends such as OneDrive, Hubic, or Google Drive as well.

Run the backup-script again.

./duplicity-backup.sh --config duplicity-backup.conf --full

Check your dropbox-folder - you should now have a new full backup in that folder.

Conclusion

In the first post of this series, I highlighted the backup strategy that everyone who cares about their data should use.

A basic rule for backups is the 3-2-1 rule: 3 total copies of the data, on 2 different mediums, with at least 1 offsite.

This part has now accomplished exactly that by having

  • 3 copies of the data:
    1. original version
    2. local backup
    3. remote backup
  • 2 different mediums
    1. local system
    2. remote system
  • 1 offsite
    1. remote backup in the cloud (or on a different system using sftp)

We could even go further and configure multiple (or all) cloud services at once, giving us more than 3 copies of the data. That is overkill most of the time, but it of course depends on the importance of the data.

This post concludes the duplicity series for now.
We have successfully set up duplicity, synchronized to various remote sites, set up notifications, and, in this post, synchronized to multiple sites at once.

Matthias
