From: "Massa, Harald Armin" <chef(at)ghum(dot)de>
To: "Loic d'Anterroches" <diaeresis(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: pg_dump with 1100 schemas being a bit slow
Date: 2009-10-07 16:00:50
Message-ID: e3e180dc0910070900w754c63d1wf80032f1cceb2700@mail.gmail.com
Lists: pgsql-general
Loic,
> settings up each time. The added benefit of doing a per schema dump is
> that I provide it to the users directly, that way they have a full
> export of their data.
You should try the timing of dumping the complete database once:

pg_dump --format=c --file=completedatabase.dmp yourdb

(with "yourdb" standing in for the database name) and then generating
the separate per-schema dumps in an extra step, like:

pg_restore --schema=%s --file=outputfilename.sql completedatabase.dmp
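A minimal sketch of that two-step workflow as a shell loop, assuming
the schema names are listed one per line in a file schemas.txt (both
the file name and the output naming are just illustrative):

    # dump the whole database once, in custom format
    pg_dump --format=c --file=completedatabase.dmp yourdb

    # extract one plain-SQL script per schema from the archive;
    # this step only reads the dump file, not the live database
    while read schema; do
        pg_restore --schema="$schema" \
                   --file="export_${schema}.sql" \
                   completedatabase.dmp
    done < schemas.txt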
I found that even with maximum compression,

pg_dump --format=c --compress=9

the built-in pg_dump compression was quicker than an uncompressed dump
followed by gzip/bzip2/7z compression afterwards.
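If you want to verify that on your own data, a rough comparison could
look like this (again with "yourdb" as a stand-in name):

    time pg_dump --format=c --compress=9 --file=db.dmp yourdb
    time pg_dump --format=p yourdb | gzip > db.sql.gz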
And after the dump file is created, pg_restore will leave your database
alone. (Make sure to put completedatabase.dmp on a separate filesystem.)
You can even try running more than one pg_restore --file in parallel.
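A sketch of that parallel extraction, once more assuming schemas.txt
holds the schema names; xargs -P keeps up to four pg_restore processes
running at a time (tune the number to your disks and CPUs):

    xargs -P 4 -I {} \
        pg_restore --schema={} --file=export_{}.sql completedatabase.dmp \
        < schemas.txt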
Best wishes,
Harald
--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
no fx, no carrier pigeon
-
%s is too gigantic of an industry to bend to the whims of reality