From: Mike Orr <sluggoster(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: pg_restore
Date: 2011-04-12 21:09:57
Message-ID: BANLkTi=koiyFRm+4p9AQ7Tunyvow=iyaRw@mail.gmail.com
Lists: pgsql-general
On Fri, Apr 1, 2011 at 2:39 PM, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
> On Wed, Mar 30, 2011 at 3:56 PM, Mike Orr <sluggoster(at)gmail(dot)com> wrote:
>> I'm converting a MySQL webapp to PostgreSQL. I have a backup server
>> which is refreshed twice daily with mysqldump/mysql and has a
>> continuously-running copy of the webapp. I want to replicate this with
>> pg_dump/pg_restore. Ideally I'd like to restore just a few tables,
>> without stopping the webapp and closing/reopening its connections. Is
>> this possible with pg_restore?
>>
>> MySQL simply locks the tables, drops/recreates them, loads the data,
>> and unlocks the tables. Other connections have to wait but they don't
>> have to be closed/reopened. The PostgreSQL manual recommends restoring
>> into an empty database using template0, which would require first
>> closing the other connections and then dropping the database. It would
>> also take unnecessary time to recreate the database and tables that
>> aren't changing. So I'm wondering if there's a less obtrusive way to
>> refresh the data.
>>
>> The tables to be updated do have 1:many relationships. How would I
>> restore all of them at once? List them all as '-t' args to the same
>> pg_restore command?
>
> pg_dump and pg_restore can do it. pg_dump -Fc is just like a regular
> dump, but you can pull out specific tables by name. Or you can just
> take a regular backup and pg_dump out specific tables with the
> proper options set in a small script.
>
> Because pg is transactional, you can do this any time of the day
> without blocking users (although there may be a small performance
> hit).
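For the record, the custom-format route Merlin describes would look roughly like this (the database name and dump path are just placeholders for my setup):
"""
# Sketch only: dump the whole database in custom format,
# then restore just the named tables, dropping them first.
pg_dump -Fc -f /tmp/sync-pg/hotline.dump hotline
pg_restore --clean -d hotline -t incidents -t entries /tmp/sync-pg/hotline.dump
"""
But I'm leaning toward plain data-only dumps loaded through psql instead.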
I'm tentatively going with a psql script:
"""
\set QUIET 1
\timing off
-- Reload all synced tables in one transaction so readers never see them empty.
BEGIN;
TRUNCATE incidents, entries, locator, events;
\i /tmp/sync-pg/hotline.sql
\i /tmp/sync-pg/locator.sql
\i /tmp/sync-pg/events.sql
COMMIT;
-- ANALYZE runs after COMMIT, once the new data is visible.
ANALYZE;
"""
And a shell script that does the saving, transfer, and loading as
discrete parts, so I can keep the same script on both hosts and run the
parts individually for testing or together for production:
"""
usage: daily-pg [-h] [-d] [-t HOSTNAME] [-l] [-k]
Synchronize Postgres data to the backup server. Options:
-d: Dump the data to /tmp/sync-pg
-t HOSTNAME: Rsync the data to the specified host
-l: Load the data from /tmp/sync-pg
-k: Keep the dump directory after loading (otherwise delete it)
-h: Print this help message and exit
If no options, do nothing.
"""
Dumping also compresses the files, and loading uncompresses them if
they're compressed.
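A rough skeleton of that script, using the same placeholder names (hotline, sync.sql) as above:
"""
#!/bin/sh
# daily-pg -- sketch of the sync wrapper; database name and file names are placeholders.
DIR=/tmp/sync-pg
DUMP=no; LOAD=no; KEEP=no; TARGET=

while getopts hdt:lk opt; do
    case $opt in
        d) DUMP=yes ;;
        t) TARGET=$OPTARG ;;
        l) LOAD=yes ;;
        k) KEEP=yes ;;
        h) echo "usage: daily-pg [-h] [-d] [-t HOSTNAME] [-l] [-k]"; exit 0 ;;
    esac
done

if [ "$DUMP" = yes ]; then
    mkdir -p "$DIR"
    # data-only dumps as shown above, then compress
    pg_dump --data-only -t incidents -t entries hotline > "$DIR/hotline.sql"
    pg_dump --data-only -t locator hotline > "$DIR/locator.sql"
    pg_dump --data-only -t events hotline > "$DIR/events.sql"
    gzip -f "$DIR"/*.sql
fi

if [ -n "$TARGET" ]; then
    rsync -a "$DIR"/ "$TARGET:$DIR"/
fi

if [ "$LOAD" = yes ]; then
    # uncompress if compressed, then run the psql script shown earlier
    ls "$DIR"/*.sql.gz >/dev/null 2>&1 && gunzip -f "$DIR"/*.sql.gz
    psql -f sync.sql hotline
    [ "$KEEP" = yes ] || rm -r "$DIR"
fi
"""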
--
Mike Orr <sluggoster(at)gmail(dot)com>