From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Tatsuo Ishii <ishii(at)postgresql(dot)org>, PostgreSQL-Dev <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Implementing incremental backup
Date: 2013-06-19 22:02:28
Message-ID: CAGTBQpa-URuu4Oc2NT0Prm7-N0GifLMv-6FeLNjf2YqaPCDLvA@mail.gmail.com
Lists: pgsql-hackers
On Wed, Jun 19, 2013 at 6:20 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> * Claudio Freire (klaussfreire(at)gmail(dot)com) wrote:
>> I don't see how this is better than snapshotting at the filesystem
>> level. I have no experience with TB scale databases (I've been limited
>> to only hundreds of GB), but from my limited mid-size db experience,
>> filesystem snapshotting is pretty much the same thing you propose
>> there (xfs_freeze), and it works pretty well. There's even automated
>> tools to do that, like bacula, and they can handle incremental
>> snapshots.
>
> Large databases tend to have multiple filesystems and getting a single,
> consistent, snapshot across all of them while under load is..
> 'challenging'. It's fine if you use pg_start/stop_backup() and you're
> saving the XLOGs off, but if you can't do that..
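For illustration, the pg_start_backup()/snapshot/pg_stop_backup() sequence Stephen refers to could be sketched roughly as below. The pg_start_backup/pg_stop_backup calls match the 9.x-era API; the volume names and the lvcreate snapshot invocation are hypothetical placeholders, and the commands are only assembled here, not executed:

```python
# Sketch of a snapshot-based base backup using backup mode.
# Hypothetical: volume paths and lvcreate options are illustrative only.

def backup_commands(label, volumes):
    """Build the ordered command list for a snapshot-based base backup.
    `volumes` are the filesystems holding the cluster (hypothetical)."""
    cmds = [["psql", "-c", "SELECT pg_start_backup(%r, true)" % label]]
    # While backup mode is active, snapshot every volume. With several
    # volumes the snapshots are not mutually consistent at a single
    # instant -- the archived XLOG replayed between pg_start_backup()
    # and pg_stop_backup() is what makes the restore consistent.
    for vol in volumes:
        cmds.append(["lvcreate", "--snapshot", "--name", label, vol])
    cmds.append(["psql", "-c", "SELECT pg_stop_backup()"])
    return cmds

cmds = backup_commands("nightly", ["/dev/vg0/pgdata", "/dev/vg0/pgxlog"])
```

The point of the sequence is ordering, not atomicity: each volume snapshot happens at a different moment, which is why this approach depends on saving the XLOGs.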
Good point there.
I still don't like the idea of having to mark each modified page. The
WAL compressor idea sounds a lot more workable. As in scalable.
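The "WAL compressor" idea mentioned above could be sketched as follows: rather than shipping every WAL record, keep only the most recent image per (relfilenode, block), so the incremental backup scales with the number of distinct pages touched rather than with total WAL volume. The record format here is invented purely for illustration; real WAL records are far richer than a page image per tuple:

```python
# Minimal sketch of WAL "compression" for incremental backup, assuming
# a simplified, hypothetical record format (relfilenode, blockno, image).
# Deduplicating by page means output size tracks distinct pages touched,
# not total WAL volume -- the scalability argument made in the thread.

def compress_wal(records):
    """records: iterable of (relfilenode, blockno, page_image) tuples,
    in WAL order. Returns {(relfilenode, blockno): latest page_image}."""
    latest = {}
    for relfilenode, blockno, page_image in records:
        latest[(relfilenode, blockno)] = page_image  # later writes win
    return latest

# Example: 5 WAL records that touch only 3 distinct pages.
wal = [
    (16384, 0, b"v1"), (16384, 1, b"v1"),
    (16384, 0, b"v2"),           # page 0 rewritten
    (16385, 7, b"v1"),
    (16384, 1, b"v2"),           # page 1 rewritten
]
incremental = compress_wal(wal)  # 3 entries, each holding the last image
```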