From: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Joachim Wieland <joe(at)mcknight(dot)de>, Greg Stark <gsstark(at)mit(dot)edu>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: a faster compression algorithm for pg_dump
Date: 2010-04-14 08:33:53
Message-ID: 87iq7uv3f2.fsf@hi-media-techno.com
Lists: pgsql-hackers
Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:
> Well, what we *really* need is a convincing argument that it's worth
> taking some risk for. I find that not obvious. You can pipe the output
> of pg_dump into your-choice-of-compressor, for example, and that gets
> you the ability to spread the work across multiple CPUs in addition to
> eliminating legal risk to the PG project.
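Sure, that piping already works today; with pigz standing in as just one example of a parallel compressor, it would look something like this:

    pg_dump production | pigz -p 8 > production.sql.gz

But a plain-text dump piped that way has no table of contents, and that is the part I rely on.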
Well, I like -Fc and playing with the catalog to restore only the
"interesting" data in staging environments. I even automated all the
catalog mangling in pg_staging, so that I just have to set up which
schema I want, with only the DDL or with the DATA too.
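A hand-rolled sketch of that kind of TOC mangling, with made-up names,
just to show what I mean:

    pg_dump -Fc -f prod.dump production
    pg_restore -l prod.dump > toc.list
    # keep the DDL but drop the table data for, say, the audit schema
    grep -v 'TABLE DATA audit ' toc.list > toc.edited
    pg_restore -L toc.edited -d staging prod.dump

pg_staging just automates that editing of the list file from a simple
per-schema configuration.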
The fun, BTW, is when you want to exclude functions used in triggers
based on the schema where the function lives, not the trigger's schema,
but that's another story.
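For the record, the lookup itself is just a catalog query, something
like this (database name is hypothetical):

    psql production -c "
      SELECT DISTINCT n.nspname AS function_schema, p.proname
        FROM pg_trigger t
        JOIN pg_proc p ON p.oid = t.tgfoid
        JOIN pg_namespace n ON n.oid = p.pronamespace;"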
So yes, having both -Fc and a compression facility other than plain gzip
would be good news. And benefiting from better compression in TOAST
would be good too, I guess (a small size hit, lots faster, would fit).
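In the meantime, nothing stops us from combining the two, with xz just
standing in for "a better compressor" (untested sketch):

    pg_dump -Fc -Z0 production | xz > prod.dump.xz
    xz -dc prod.dump.xz > prod.dump
    pg_restore -L toc.edited -d staging prod.dump

That keeps the selective-restore part and simply makes the compression
step an external concern.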
Summary: my convincing argument is using the dumps to efficiently
prepare development and testing environments from production data,
thanks to -Fc. That includes skipping some of the data at restore time.
Regards,
--
dim