From: "Loic d'Anterroches" <diaeresis(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: pg_dump with 1100 schemas being a bit slow
Date: 2009-10-07 16:25:00
Message-ID: 8e2f2cb20910070925j55700594n2e37e818f5053348@mail.gmail.com
Lists: pgsql-general
Harald,
>> setting up each time. The added benefit of doing a per-schema dump is
>> that I provide it to the users directly; that way they have a full
>> export of their data.
>
> you should try the timing with
>
> pg_dump --format=c --file=completedatabase.dmp
>
> and then generating the separate schemas in an extra step like
>
> pg_restore --schema=%s --file=outputfilename.sql completedatabase.dmp
>
> I found that even with maximum compression
>
> pg_dump --format=c --compress=9
>
> the pg_dump compression was quicker than dump + gzip/bzip2/7z compression
> afterwards.
>
> And after the dumpfile is created, pg_restore will leave your database
> alone.
> (Make sure to put completedatabase.dmp on a separate filesystem.) You can
> even try to run more than one pg_restore --file in parallel.
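Harald's suggestion boils down to one custom-format dump followed by a per-schema pg_restore extraction loop. A minimal sketch of that loop, with the caveat that the database name, schema names, and file names below are illustrative and that the script only prints the commands it would run rather than executing them:

```shell
#!/bin/sh
# Step 1 (run once, off-peak): a single compressed custom-format dump.
# "mydb" is an illustrative database name.
#   pg_dump --format=c --compress=9 --file=completedatabase.dmp mydb

# Build the pg_restore invocation that extracts one schema's SQL from the
# full dump. The command is echoed rather than executed, so the sketch can
# be reviewed (or piped to sh) before it touches anything.
restore_cmd() {
    schema="$1"
    echo "pg_restore --schema=$schema --file=dump_$schema.sql completedatabase.dmp"
}

# Step 2: loop over schema names. In practice the list would come from e.g.
#   psql -At mydb -c "SELECT nspname FROM pg_namespace
#     WHERE nspname NOT LIKE 'pg\_%' AND nspname <> 'information_schema'"
# Two placeholder schemas stand in for the ~1100 real ones here.
for s in tenant_a tenant_b; do
    restore_cmd "$s"
done
```

Since each pg_restore only reads completedatabase.dmp, several of these extractions can run in parallel without putting any load on the live database, as Harald points out.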
Yummy! The speed of a full dump and the benefits of the per-schema
dump for the users. I will try this one tonight when the load is low.
I will keep you informed of the results.
Thanks a lot for all the good ideas and pointers!
loïc
--
Loïc d'Anterroches - Céondo Ltd - http://www.ceondo.com