From: Matthew Wakeling <matthew(at)flymine(dot)org>
To: David Newall <postgresql(at)davidnewall(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: pg_dump far too slow
Date: 2010-03-18 12:15:01
Message-ID: alpine.DEB.2.00.1003181113060.1887@aragorn.flymine.org
Lists: pgsql-performance
On Sun, 14 Mar 2010, David Newall wrote:
> nohup time pg_dump -f database.dmp -Z9 database
>
> I presumed pg_dump was CPU-bound because of gzip compression, but a test I
> ran makes that seem unlikely...
There was some discussion about this a few months ago at
http://archives.postgresql.org/pgsql-performance/2009-07/msg00348.php
It seems that having pg_dump do the compression itself is a fair amount
slower than piping the plain-format dump straight through gzip. You also get
a bit more parallelism that way, since pg_dump and gzip run as separate
processes.
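[Editor's note: a minimal sketch of the piped approach discussed above.
The pg_dump invocation mirrors the one quoted from the original post
("database" is the poster's database name); since no PostgreSQL server can
be assumed here, the runnable lines below demonstrate the same pipeline
shape with a stand-in data producer.]

```shell
# The approach suggested in the thread: let pg_dump emit an uncompressed
# plain-format dump and compress it in a separate gzip process, so the two
# overlap on a multi-core machine:
#
#   pg_dump database | gzip > database.dmp.gz
#
# Stand-in demonstration of the same pipeline shape (no server required):
printf 'dump data\n' | gzip > /tmp/demo.dmp.gz
gunzip -c /tmp/demo.dmp.gz   # prints: dump data
```

Restoring works the same way in reverse: gunzip -c database.dmp.gz | psql database.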
Matthew
--
I'm always interested when [cold callers] try to flog conservatories.
Anyone who can actually attach a conservatory to a fourth floor flat
stands a marginally better than average chance of winning my custom.
(Seen on Usenet)