From: Stephen Rees <srees(at)pandora(dot)com>
To: Robert Treat <rob(at)xzilla(dot)net>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Backup hot-standby database.
Date: 2011-03-18 20:55:16
Message-ID: A71C2574-38E9-4EC4-A118-20E11546B7C8@pandora.com
Lists: pgsql-admin
Robert,
Thank you for the reply. I had the wrong end of the stick regarding
pg_dump and hot standby.
I will take a look at omnipitr, as you suggest.
Per your comment
> You have to stop replay while you are doing the dumps like this
how do I stop, and later resume, replay while keeping both the master
and the hot standby available throughout?
- Steve
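
For the archives: PostgreSQL 9.1 and later expose replay control as SQL
functions on the standby; 9.0 itself has no SQL-level pause, so there the
options are stopping the standby or holding back WAL shipping. A minimal
sketch of the 9.1-style sequence, run on the standby:

SELECT pg_xlog_replay_pause();      -- halt WAL replay; the standby stays up and readable
SELECT pg_is_xlog_replay_paused();  -- returns true while replay is paused
-- ... run pg_dump / COPY against the standby here ...
SELECT pg_xlog_replay_resume();     -- replay catches up from the WAL accumulated meanwhile

Both master and standby remain available throughout; the cost is that the
standby falls behind by however long the dump takes.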
On Mar 15, 2011, at 3:04 PM, Robert Treat wrote:
> On Tue, Mar 15, 2011 at 5:50 PM, Stephen Rees <srees(at)pandora(dot)com>
> wrote:
>> Using PostgreSQL 9.0.x
>>
>> I cannot use pg_dump to generate a backup of a database on a hot-
>> standby
>> server, because it is, by definition, read-only.
>
> That really makes no sense :-) You can use pg_dump on a read-only
> slave, but the issue people tend to run into is that the pg_dump
> operation gets canceled by incoming replay changes before it can
> finish. You can of course modify the configs to work around this
> somewhat, but eventually it becomes a problem.
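
The config knobs in question are presumably the standby query-cancellation
delays, both of which exist in 9.0. A sketch for the standby's
postgresql.conf:

# On the standby: -1 means "never cancel a standby query in favor of
# WAL replay", at the cost of unbounded replay lag while queries run.
max_standby_streaming_delay = -1
max_standby_archive_delay = -1

With -1 the dump can always finish, but replay stalls behind long-running
queries, which is the same trade-off described below.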
>
>> However, it seems that I
>> can use COPY TO within a serializable transaction to create a
>> consistent set
>> of data file(s). For example,
>>
>> BEGIN TRANSACTION;
>> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
>> COPY t1 TO '/tmp/t1';
>> COPY t2 TO '/tmp/t2';
>>
>> ... etc ...
>>
>> COPY t<n> TO '/tmp/t<n>';
>> COMMIT TRANSACTION;
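
One caveat with the approach quoted above: COPY ... TO '/tmp/...' writes
files on the standby server itself and requires superuser. psql's \copy is
a client-side variant of the same idea; a sketch (table and file names are
placeholders), noting that from 9.1 on SERIALIZABLE is rejected on a hot
standby and REPEATABLE READ gives the same single-snapshot behavior:

BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- use REPEATABLE READ on 9.1+ standbys
\copy t1 TO 't1.dat'
\copy t2 TO 't2.dat'
COMMIT;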
>>
>> I can then use pg_dump to export the corresponding database schema
>> from the
>> master DBMS.
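
For the schema half, pg_dump has a flag for exactly this; a sketch, with
host and database names as placeholders:

pg_dump --schema-only -h master-host mydb > schema.sql

Restoring is then schema.sql first, followed by COPY ... FROM the data
files.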
>>
>> Is this going to scale to a multi-GB database, where it will take
>> hours to
>> export the data from all of the tables, or are there scalability
>> issues of
>> which I should be aware?
>>
>
> Well, basically that's it in a nutshell. You have to stop replay
> while you are doing dumps like this, so eventually the delay becomes
> unbearable for most people (especially on the order of hours).
>
> There are several ways to work around this... you can use filesystem
> snapshots to make copies and dump from there; great if you have the
> option. If you don't, you might want to look into omnipitr; it can
> create filesystem-level backups from a slave (not the same as a
> logical export, but it might do).
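
For the snapshot route, a sketch assuming the standby's data directory
(including pg_xlog) sits on a single LVM volume; volume and mount names are
placeholders:

# freeze a crash-consistent, point-in-time copy of the standby's data dir
lvcreate --snapshot --size 10G --name pgdata_snap /dev/vg0/pgdata
mount /dev/vg0/pgdata_snap /mnt/pgdata_snap
# archive it, or start a throwaway postgres on a copy and pg_dump from that
umount /mnt/pgdata_snap
lvremove /dev/vg0/pgdata_snap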
>
>
> Robert Treat
> play: xzilla.net
> work: omniti.com
> hiring: l42.org/lg