Re: pg_dump slower than pg_restore

From: David Wall <d(dot)wall(at)computer(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: pg_dump slower than pg_restore
Date: 2014-07-04 00:30:50
Message-ID: 53B5F5BA.2070600@computer.org
Lists: pgsql-general


On 7/3/2014 5:13 PM, Bosco Rama wrote:
> If you use gzip you will be doing the same 'possibly unnecessary'
> compression step. Use a similar approach to the gzip command as you
> would for the pg_dump command. That is, use one of the -[0-9] options,
> like this: $ pg_dump -Z0 -Fc ... | gzip -[0-9] ...

Bosco, maybe you can recommend a different approach. I pretty much run
daily backups that I keep only for disaster recovery. I generally don't
do partial recoveries, so I doubt I'd ever modify the dump output. I
just re-read the docs on the output formats, and it's not clear which
one I'd be best off with; "plain" is the default, but the docs don't say
it can be used with pg_restore.
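
If I understand the docs right, a plain-format dump is just a SQL
script, so presumably it gets restored by feeding it back to psql
rather than pg_restore; roughly like this (database and file names are
only placeholders):

  $ pg_dump -Fp mydb > mydb.sql
  $ psql -d mydb -f mydb.sql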

Maybe --format=c isn't the fastest option for me, and I'm even less
sure about the compression. I do want to be able to restore using
pg_restore (unless plain is the best route, in which case, how do I
restore that type of backup?), and I need to include large objects
(--oids), but otherwise I'm mostly interested in the backup being as
quick as possible.
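
If I stay with the custom format, I'm guessing something like the
following keeps the dump step fast and still lets me restore with
pg_restore, possibly in parallel (names are placeholders, and -j 4
assumes the restore machine has a few spare cores):

  $ pg_dump -Fc -Z0 -f mydb.dump mydb
  $ pg_restore -j 4 -d mydb mydb.dump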

Many of the large objects are gzip-compressed when stored. Would I be
better off letting PG do its compression and dropping gzip, or turning
off all PG compression and using gzip? Or perhaps using neither, since
my large objects, which make up the bulk of the database, are already
compressed?
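
As far as I can tell, the choices boil down to variations on these
(file names are just examples):

  $ pg_dump -Fc -f mydb.dump mydb                # pg_dump compresses (default level)
  $ pg_dump -Fc -Z0 mydb | gzip > mydb.dump.gz   # no pg_dump compression, gzip instead
  $ pg_dump -Fc -Z0 -f mydb.dump mydb            # no compression at all

Since the large objects are already gzipped, maybe the last one wins on
time even if the file ends up somewhat bigger.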
