From: Bill Moran <wmoran(at)potentialtech(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: pg_dump with 1100 schemas being a bit slow
Date: 2009-10-07 15:54:54
Message-ID: 20091007115454.5b5e369a.wmoran@potentialtech.com
Lists: pgsql-general
In response to "Loic d'Anterroches" <diaeresis(at)gmail(dot)com>:
> On Wed, Oct 7, 2009 at 4:23 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> > "Loic d'Anterroches" <diaeresis(at)gmail(dot)com> writes:
> >> Each night I am running:
> >> pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip >
> >> /path/to/backups/%s/%s-%s.sql.gz
> >> this for each installation, so 1100 times. Substitution strings are to
> >> timestamp and get the right schema.
Have you tested the speed without the gzip?
We found that compressing the dump takes considerably longer than the pg_dump
itself, but because of the pipe, pg_dump can't release its locks until gzip
has finished processing all of the data.
By doing the pg_dump in a separate step from the compression, we were able
to eliminate our table locking issues, i.e.:
pg_dump --blobs --schema=%s --no-acl -U postgres indefero > /path/to/backups/%s/%s-%s.sql && gzip /path/to/backups/%s/%s-%s.sql
Of course, you'll need enough disk space to store the uncompressed
dump while gzip works.
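
A minimal sketch of that approach for a per-schema loop, assuming the schema
names sit one per line in a file; the file name and backup paths here are
placeholders, not your actual script:

  #!/bin/sh
  # Dump each schema to a plain .sql file first so pg_dump can release its
  # locks as soon as possible, then compress in a separate step.
  STAMP=`date +%Y%m%d`
  while read SCHEMA; do
      OUT="/path/to/backups/$SCHEMA/$SCHEMA-$STAMP.sql"
      pg_dump --blobs --schema="$SCHEMA" --no-acl -U postgres indefero > "$OUT" \
          && gzip "$OUT"
  done < schema_list.txt

If disk space is tight, the gzip step could also be deferred to a separate
cron job that runs after all the dumps have finished.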
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/