Re: pg_dump with 1100 schemas being a bit slow

From: "Loic d'Anterroches" <diaeresis(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: pg_dump with 1100 schemas being a bit slow
Date: 2009-10-07 16:20:59
Message-ID: 8e2f2cb20910070920l1b63cfdek7391a9480dda73e3@mail.gmail.com
Lists: pgsql-general

On Wed, Oct 7, 2009 at 5:54 PM, Bill Moran <wmoran(at)potentialtech(dot)com> wrote:
> In response to "Loic d'Anterroches" <diaeresis(at)gmail(dot)com>:
>
>> On Wed, Oct 7, 2009 at 4:23 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> > "Loic d'Anterroches" <diaeresis(at)gmail(dot)com> writes:
>> >> Each night I am running:
>> >> pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip >
>> >> /path/to/backups/%s/%s-%s.sql.gz
>> >> this for each installation, so 1100 times. The substitution strings
>> >> supply the timestamp and select the right schema.
>
> Have you tested the speed without the gzip?

This was the first thing I tried, but it gave no significant
improvement. The amount of data to gzip per schema is very small, so
compression is not the bottleneck here.

> We found that compressing the dump takes considerably longer than pg_dump
> does, but pg_dump can't release its locks until gzip has completely
> processed all of the data, because of the pipe.

Good tip, I'll keep that in mind for the future!
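For the archives, one way to apply that tip is to dump uncompressed to a
temporary file, so pg_dump releases its locks as soon as it finishes
writing, and compress afterward. This is only a sketch of the idea, not a
tested setup; the paths and the schema/timestamp variables are
illustrative placeholders:

```shell
#!/bin/sh
# Sketch: decouple pg_dump from gzip so locks are released sooner.
# SCHEMA and the backup path below are illustrative placeholders.
SCHEMA="$1"
STAMP=$(date +%Y%m%d)
OUT="/path/to/backups/$SCHEMA/$SCHEMA-$STAMP.sql"

# pg_dump finishes (and releases its locks) once the plain file is written...
pg_dump --blobs --schema="$SCHEMA" --no-acl -U postgres indefero > "$OUT"

# ...and compression happens afterward, outside the dump's transaction.
gzip "$OUT"
```

The trade-off is the temporary disk space needed for the uncompressed
dump, which should be acceptable when each per-schema dump is small.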

Thanks,
loïc

--
Loïc d'Anterroches - Céondo Ltd - http://www.ceondo.com
