From: Francisco Olarte <folarte(at)peoplecall(dot)com>
To: anj patnaik <patna73(at)gmail(dot)com>
Cc: Scott Mead <scottm(at)openscg(dot)com>, Guillaume Lelarge <guillaume(at)lelarge(dot)info>, Melvin Davidson <melvin6925(at)gmail(dot)com>, Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: question
Date: 2015-10-17 11:36:36
Message-ID: CA+bJJbwYmMzv-yU1m-EDsOUf1u9sXJyxg4Wihh5J40KR_DDwbA@mail.gmail.com
Lists: pgsql-general
Hi Anj:
On Thu, Oct 15, 2015 at 10:35 PM, anj patnaik <patna73(at)gmail(dot)com> wrote:
>
> I will experiment with -Fc (custom). The file is already growing very
> large.
>
I do not recall if you've already provided them, but how large? I mean,
if you have a large database the backup will take time and occupy space,
and you may be approaching the point where that is unavoidable.
As a benchmark, for intellectual satisfaction, the smallest backup you can
get is probably text format compressed with the most aggressive option of
your favorite compressor, but this is normally useless except for very
special cases.
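If you want to try that as a benchmark, something along these lines would
do ( xz is just an example, any compressor with an aggressive setting works
the same way, and the path is a placeholder ):

    pg_dump -Fp postgres | xz -9 > /tmp/postgres.sql.xz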
My recommendation would be to use plain -Fc for a backup; this is what I do,
sometimes tweaking -Z after tests, but in my experience the default level is
normally right. Bear in mind DB disks tend to be expensive, while backup
disks can be much cheaper and, unless you are keeping a lot of them,
backups are smaller. As an example, we have a server pair ( replicated ),
each with a couple of short-stroked fast disks for the database and a couple
of 'normal' disks for first-line backups. The normal disks are about
ten times the size of the database disks and easily fit 30 backups, so we can
back up to one of them, copy to the second, and replicate to the other server
in the pair, just using -Fc. This works because backups compress indexes quite
well, by reducing them to a 'CREATE INDEX', and the copy format used inside is
generally more compact than the layout used on disk ( which needs free
space, is framed, and lots of other things ) and compresses quite well too.
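For reference, the kind of invocation I mean is just something like this
( database name and paths are placeholders, adjust to your setup ):

    pg_dump -Fc -f /backups/mydb.dump mydb
    # only after testing, if you want to trade CPU time for size:
    pg_dump -Fc -Z 9 -f /backups/mydb.dump mydb
    # restore later with:
    pg_restore -d mydb /backups/mydb.dump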
If you are pressed for backup size, you normally either have very special needs
or do not have a properly dimensioned system. But to say anything more you
will need to provide some numbers ( how big your database and backups are,
how fast your disks are, and things like this ). With those, maybe some hints
can be provided.
>
> I am running this:
> ./pg_dump -t RECORDER -Fc postgres | gzip > /tmp/dump
>
In this case gzip is useless. -Fc already uses gzip compression at the
member level. Doing it with -Z0 and then gzipping will gain you a bit,
obviously, as it will compress everything as a single chunk ( unless you
manage to hit a pathological case ), but I doubt it will be significant.
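In other words, for your case something like this ( same table and path as
in your command ) already gives you a compressed file, with no extra gzip
step:

    ./pg_dump -t RECORDER -Fc -f /tmp/dump postgres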
As pointed out elsewhere, if you use -Fc with -Z0 and then compress with a
'better' compressor you may get a smaller file, or get it faster, but
remember you'll need to decompress it before restoring ( this does not
happen with text format, where you can do a streaming restore, but the
restore options for text format are limited; it's an all-or-nothing approach
unless you are really fluent with stream editors ).
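To make that concrete, a rough sketch of both approaches ( xz is just an
example compressor, paths are placeholders ):

    # custom format, external compression:
    ./pg_dump -t RECORDER -Fc -Z0 postgres | xz > /tmp/dump.xz
    # decompress before restoring with pg_restore:
    xz -d /tmp/dump.xz && pg_restore -d postgres /tmp/dump

    # text format, which can be restored as a stream:
    ./pg_dump -t RECORDER -Fp postgres | xz > /tmp/dump.sql.xz
    xz -dc /tmp/dump.sql.xz | psql postgres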
Francisco Olarte.