From: | "Scott Marlowe" <smarlowe(at)qwest(dot)net> |
---|---|
To: | "Karim Nassar" <Karim(dot)Nassar(at)NAU(dot)EDU> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: Duplicating a database |
Date: | 2004-10-24 06:02:26 |
Message-ID: | 1098597745.21035.126.camel@localhost.localdomain |
Lists: pgsql-general
On Sat, 2004-10-23 at 22:22, Karim Nassar wrote:
> > If you just need a working copy, not necessarily right up to date at any
> > time, you can just dump and restore it:
> >
> > pg_dumpall -h source_server |psql -h dest_server
> >
> > add switches as necessary.
>
> That would be great for the first time. But what I want to do is copy
> ~postgresql/data, stomping/deleting as necessary. Roughly, my thinking
> is a daily cron job on the server:
>
> rm -rf /safe/dir/data
> /etc/init.d/postgresql stop
> tar czf - -C ~postgres data | tar xzf - -C /safe/dir/
> /etc/init.d/postgresql start
>
>
> And a client script:
>
> /etc/init.d/postgresql stop
> rm -rf ~postgres/data
> ssh user(at)server tar czf - -C /safe/dir data|tar xvzf - -C ~postgres
> /etc/init.d/postgresql start
>
> Or something similar with rsync instead of tar.
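For the rsync variant you mention, the copy step on the client might look
something like this (untested sketch, using the same placeholder host and
paths as your scripts; --delete makes the local copy mirror the remote one):

/etc/init.d/postgresql stop
rsync -az --delete -e ssh user(at)server:/safe/dir/data/ ~postgres/data/
/etc/init.d/postgresql start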
Assuming there's only one or two databases in the cluster, it would be
pretty easy to just do a
dropdb -h dest dbname1
dropdb -h dest dbname2
createdb -h dest dbname1
createdb -h dest dbname2
pg_dump -h source dbname1 | psql -h dest dbname1
pg_dump -h source dbname2 | psql -h dest dbname2
That way there's no need to take down the source server or do anything
special to it.
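If you want to run that nightly from cron, a small wrapper script along
these lines would do it (source, dest and the dbname placeholders obviously
need replacing, and dropdb will refuse to drop a database that still has
connections):

#!/bin/sh
# Refresh the copies on dest from source.
# Hostnames and database names below are placeholders.
for db in dbname1 dbname2; do
    dropdb   -h dest "$db"
    createdb -h dest "$db"
    pg_dump  -h source "$db" | psql -h dest "$db"
done

If a dump dies part way through you'd be left with a half-loaded copy, so on
anything important you might prefer dumping to a file first and only
restoring once the dump has succeeded.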