Duplicity incremental backups take too long

Time: 2017-03-26 16:20:18

Tags: amazon-s3 duplicity

I have duplicity running daily incremental backups to S3, about 37 GiB of data.

For the first month or so it was fine and used to finish in about an hour. Then it started taking far too long. Right now, as I type this, it is still running the daily backup it started 7 hours ago.

I am running two commands, first the backup and then the cleanup:

duplicity --full-if-older-than 1M LOCAL.SOURCE S3.DEST --volsize 666 --verbosity 8
duplicity remove-older-than 2M S3.DEST

The log:

Temp has 54774476800 available, backup will use approx 907857100.

So temp has plenty of space, good. Then it starts with this...

Copying duplicity-full-signatures.20161107T090303Z.sigtar.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-13tylb-2
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-NanCxQ-3
[...]
Copying duplicity-inc.20161110T095624Z.to.20161111T103456Z.manifest.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-VQU2zx-30
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-4Idklo-31
[...]

This goes on all the way up to today's date, taking a long time for each file. Then it continues with this...

Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Mon Nov  7 09:03:03 2016)
Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Wed Nov  9 18:09:07 2016)
Added incremental Backupset (start_time: Thu Nov 10 09:56:24 2016 / end_time: Fri Nov 11 10:34:56 2016)

After a long time...

Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /home/user/.cache/duplicity/700b5f90ee4a620e649334f96747bd08

Found 6 secondary backup chains.
Secondary chain 1 of 6:
-------------------------
Chain start time: Mon Nov  7 09:03:03 2016
Chain end time: Mon Nov  7 09:03:03 2016
Number of contained backup sets: 1
Total number of contained volumes: 2
Type of backup set:                            Time:      Num volumes:
               Full         Mon Nov  7 09:03:03 2016                 2
-------------------------
Secondary chain 2 of 6:
-------------------------
Chain start time: Wed Nov  9 18:09:07 2016
Chain end time: Wed Nov  9 18:09:07 2016
Number of contained backup sets: 1
Total number of contained volumes: 11
Type of backup set:                            Time:      Num volumes:
               Full         Wed Nov  9 18:09:07 2016                11
-------------------------

Secondary chain 3 of 6:
-------------------------
Chain start time: Thu Nov 10 09:56:24 2016
Chain end time: Sat Dec 10 09:44:31 2016
Number of contained backup sets: 31
Total number of contained volumes: 41
Type of backup set:                            Time:      Num volumes:
               Full         Thu Nov 10 09:56:24 2016                11
        Incremental         Fri Nov 11 10:34:56 2016                 1
        Incremental         Sat Nov 12 09:59:47 2016                 1
        Incremental         Sun Nov 13 09:57:15 2016                 1
        Incremental         Mon Nov 14 09:48:31 2016                 1
        [...]

After listing all the chains:

Also found 0 backup sets not part of any chain, and 1 incomplete backup set.
These may be deleted by running duplicity with the "cleanup" command.
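
As the log itself suggests, those leftover incomplete sets can be removed with duplicity's cleanup command; it only reports what it would do unless --force is passed, roughly:

duplicity cleanup --force S3.DEST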

And that is only the backup part. It takes hours to get through all of this, even though the actual upload of 37 GiB to S3 takes only about 10 minutes:

ElapsedTime 639.59 (10 minutes 39.59 seconds)
SourceFiles 288
SourceFileSize 40370795351 (37.6 GB)

Then comes the cleanup, which gives me this:

Cleaning up
Local and Remote metadata are synchronized, no sync needed.
Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
There are backup set(s) at time(s):
Tue Jan 10 09:58:05 2017
Wed Jan 11 09:54:03 2017
Thu Jan 12 09:56:42 2017
Fri Jan 13 10:05:05 2017
Sat Jan 14 10:24:54 2017
Sun Jan 15 09:49:31 2017
Mon Jan 16 09:39:41 2017
Tue Jan 17 09:59:05 2017
Wed Jan 18 09:59:56 2017
Thu Jan 19 10:01:51 2017
Fri Jan 20 09:35:30 2017
Sat Jan 21 09:53:26 2017
Sun Jan 22 09:48:57 2017
Mon Jan 23 09:38:45 2017
Tue Jan 24 09:54:29 2017
Which can't be deleted because newer sets depend on them.
Found old backup chains at the following times:
Mon Nov  7 09:03:03 2016
Wed Nov  9 18:09:07 2016
Sat Dec 10 09:44:31 2016
Mon Jan  9 10:04:51 2017
Rerun command with --force option to actually delete.
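
In other words, remove-older-than on its own is only a dry run; actually pruning those old chains would look something like:

duplicity remove-older-than 2M --force S3.DEST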

1 Answer:

Answer 0 (score: 0):

I found the problem. Because of an earlier issue I had followed this answer and added this code to my script:

rm -rf ~/.cache/deja-dup/*
rm -rf ~/.cache/duplicity/*

That was meant to be a one-time workaround for a random bug, but the answer did not mention that. As a result, every day the script deleted the cache right after it had been synchronized, and the next day duplicity had to download the whole cache from S3 all over again.
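
A minimal sketch of what the corrected daily script looks like (same LOCAL.SOURCE / S3.DEST placeholders as above; the key point is that the local cache is left alone between runs, and --force is added so the old chains actually get pruned):

#!/bin/bash
# Daily duplicity backup to S3.
# Keep ~/.cache/duplicity between runs: wiping it forces duplicity to
# re-download every signature and manifest file from S3 on the next run,
# which is exactly what made the backups take hours.
duplicity --full-if-older-than 1M --volsize 666 --verbosity 8 LOCAL.SOURCE S3.DEST
duplicity remove-older-than 2M --force S3.DEST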
