From: David Wall <d(dot)wall(at)computer(dot)org>
To: Bosco Rama <postgres(at)boscorama(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: pg_dump slower than pg_restore
Date: 2014-07-03 23:51:06
Message-ID: 53B5EC6A.9050806@computer.org
Lists: pgsql-general


On 7/3/2014 10:36 AM, Bosco Rama wrote:
> If those large objects are 'files' that are already compressed (e.g.
> most image formats and PDFs), you are spending a lot of time trying to
> compress the compressed data ... and failing.
>
> Try setting the compression factor to an intermediate value, or even
> zero (i.e. no dump compression). For example, to get the 'low hanging
> fruit' compressed:
> $ pg_dump -Z1 -Fc ...
>
> IIRC, the default value of '-Z' is 6.
>
> As usual, your choice will be a run-time vs. file-size trade-off, so
> try several values for '-Z' and see what works best for you.

That's interesting. Since I gzip the resulting output, I'll give -Z0 a
try. I didn't realize that any compression was on by default.

Thanks for the tip...
