From: Marco Nenciarini <marco(dot)nenciarini(at)2ndquadrant(dot)it>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Proposal: Incremental Backup
Date: 2014-07-29 16:11:10
Message-ID: 53D7C79E.6080003@2ndquadrant.it
Lists: pgsql-hackers
On 25/07/14 16:15, Michael Paquier wrote:
> On Fri, Jul 25, 2014 at 10:14 PM, Marco Nenciarini
> <marco(dot)nenciarini(at)2ndquadrant(dot)it> wrote:
>> 0. Introduction:
>> =================================
>> This is a proposal for adding incremental backup support to streaming
>> protocol and hence to pg_basebackup command.
> Not sure that incremental is a right word as the existing backup
> methods using WAL archives are already like that. I recall others
> calling that differential backup from some previous threads. Would
> that sound better?
>
"differential backup" is widely used to refer to a backup that is always
based on a "full backup". An "incremental backup" can be based either on
a "full backup" or on a previous "incremental backup". We picked that
name to emphasize this property.
>> 1. Proposal
>> =================================
>> Our proposal is to introduce the concept of a backup profile.
> Sounds good. Thanks for looking at that.
>
>> The backup
>> profile consists of a file with one line per file detailing tablespace,
>> path, modification time, size and checksum.
>> Using that file the BASE_BACKUP command can decide which file needs to
>> be sent again and which is not changed. The algorithm should be very
>> similar to rsync, but since our files are never bigger than 1 GB per
>> file that is probably granular enough not to worry about copying parts
>> of files, just whole files.
> There are actually two levels of differential backups: file-level,
> which is the approach you are taking, and block level. Block level
> backup makes necessary a scan of all the blocks of all the relations
> and take only the data from the blocks newer than the LSN given by the
> BASE_BACKUP command. In the case of file-level approach, you could
> already backup the relation file after finding at least one block
> already modified.
I like the idea of short-circuiting the checksum when you find a block
with an LSN newer than the previous backup's START WAL LOCATION;
however, I see it as a further optimization. In any case, it is worth
storing the backup start LSN in the header section of the
backup_profile, together with other useful information about the backup
starting position. As a first step we would have a simple and robust
method to produce a file-level incremental backup.
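For illustration only, a backup_profile could look like the sketch
below. The header field names and the file-list layout are not settled
yet; this is just an example of the information we plan to record, i.e.
the backup start LSN plus one line per file with tablespace, path,
mtime, size and checksum (columns shown space-separated for
readability; the on-disk format might be tab-separated):

    POSTGRESQL BACKUP PROFILE 1
    START WAL LOCATION: 0/2000028 (file 000000010000000000000002)
    START TIME: 2014-07-29 16:11:10

    pg_default  base/12345/16384  1406650270  8192  0c8e...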
> Btw, the size of relation files depends on the size
> defined by --with-segsize when running configure. 1GB is the default
> though, and the value usually used. Differential backups can reduce
> the size of overall backups depending on the application, at the cost
> of some CPU to analyze the relation blocks that need to be included in
> the backup.
We tested the idea on several multi-terabyte installations using a
custom deduplication script that follows this approach. The result is
that it can reduce the backup size by more than 50%. Moreover, most
databases in the 50GB - 1TB range can benefit greatly from it.
>
>> It could also be used in 'refresh' mode, by allowing the pg_basebackup
>> command to 'refresh' an old backup directory with a new backup.
> I am not sure this is really helpful...
Could you please elaborate on that last sentence?
>
>> The final piece of this architecture is a new program called
>> pg_restorebackup which is able to operate on a "chain of incremental
>> backups", allowing the user to build an usable PGDATA from them or
>> executing maintenance operations like verify the checksums or estimate
>> the final size of recovered PGDATA.
> Yes, right. Taking a differential backup is not difficult, but
> rebuilding a constant base backup with a full based backup and a set
> of differential ones is the tricky part, but you need to be sure that
> all the pieces of the puzzle are here.
If we limit it to be file-based, the recovery procedure is conceptually
simple: read every involved manifest starting from the oldest backup
and take the latest available version of each file (or mark it for
deletion, if its most recent mention is in a backup_exceptions file).
Keeping the algorithm as simple as possible is, in our opinion, the
best way to go.
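To make the idea concrete, here is a minimal Python sketch of that
recovery loop. It assumes a layout that is not settled yet: each backup
directory contains a backup_profile whose header section ends at the
first blank line, followed by a tab-separated file list, plus an
optional backup_exceptions file listing removed paths. All names here
are illustrative, not a final interface.

    import os
    import shutil

    def restore_chain(backup_dirs, target_dir):
        """backup_dirs: the full backup first, then each incremental in order."""
        latest = {}  # relative path -> backup dir holding its newest version
        for backup in backup_dirs:
            with open(os.path.join(backup, "backup_profile")) as f:
                # assumption: header section ends at the first blank line
                _, _, file_list = f.read().partition("\n\n")
            for line in file_list.splitlines():
                if not line:
                    continue
                tablespace, path, mtime, size, checksum = line.split("\t")
                latest[path] = backup  # a newer backup overrides older ones
            exceptions = os.path.join(backup, "backup_exceptions")
            if os.path.exists(exceptions):
                with open(exceptions) as f:
                    for line in f:
                        latest.pop(line.strip(), None)  # file was deleted
        for path, backup in latest.items():
            dst = os.path.join(target_dir, path)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(os.path.join(backup, path), dst)  # preserves mtime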
>
>> We created a wiki page with all implementation details at
>> https://wiki.postgresql.org/wiki/Incremental_backup
> I had a look at that, and I think that you are missing the shot in the
> way differential backups should be taken. What would be necessary is
> to pass a WAL position (or LSN, logical sequence number like
> 0/2000060) with a new clause called DIFFERENTIAL (INCREMENTAL in your
> first proposal) in the BASE BACKUP command, and then have the server
> report back to client all the files that contain blocks newer than the
> given LSN position given for file-level backup, or the blocks newer
> than the given LSN for the block-level differential backup.
In our proposal a file is skipped if, and only if, it has the same
size, the same mtime and *the same checksum* as the original file. We
intentionally keep it simple, so that we can also easily support files
that are stored in $PGDATA but do not follow any format known to
Postgres. However, even with more complex algorithms, all the required
information should be stored in the header part of the backup_profile
file.
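As a sketch of that skip test (the checksum algorithm is an open
choice; SHA-1 below is only an assumption for illustration, as are the
function names):

    import hashlib
    import os

    def file_checksum(path):
        h = hashlib.sha1()  # assumption: actual algorithm not decided yet
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def can_skip(path, prev_mtime, prev_size, prev_checksum):
        """Skip the file only if size, mtime *and* checksum all match."""
        st = os.stat(path)
        if st.st_size != prev_size or int(st.st_mtime) != prev_mtime:
            return False  # cheap metadata tests first
        return file_checksum(path) == prev_checksum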
> Note that we would need a way to identify the type of the backup taken
> in backup_label, with the LSN position sent with DIFFERENTIAL clause
> of BASE_BACKUP, by adding a new field in it.
Good point. It definitely has to be reported in the backup_label file.
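For example (the field name below is purely hypothetical, just to show
the idea), an incremental backup_label could carry one extra line
alongside the existing ones:

    START WAL LOCATION: 0/6000028 (file 000000010000000000000006)
    INCREMENTAL FROM LOCATION: 0/2000028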
>
> When taking a differential backup, the LSN position necessary would be
> simply the value of START WAL LOCATION of the last differential or
> full backup taken. This results as well in a new option for
> pg_basebackup of the type --differential='0/2000060' to take directly
> a differential backup.
It's possible to use this approach, but I feel that relying on
checksums is more robust. In any case, I'd want to have a file with all
the checksums, to be able to validate the backup later.
>
> Then, for the utility pg_restorebackup, what you would need to do is
> simply to pass a list of backups to it, then validate if they can
> build a consistent backup, and build it.
>
> Btw, the file-based method would be simpler to implement, especially
> for rebuilding the backups.
>
> Regards,
>
Exactly. This is the bare minimum. More options can be added later.
Regards,
Marco
--
Marco Nenciarini - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
marco(dot)nenciarini(at)2ndQuadrant(dot)it | www.2ndQuadrant.it