From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Joachim Wieland <joe(at)mcknight(dot)de>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: a faster compression algorithm for pg_dump
Date: 2010-04-13 19:03:58
Message-ID: 8302.1271185438@sss.pgh.pa.us
Lists: pgsql-hackers

Joachim Wieland <joe(at)mcknight(dot)de> writes:
> If we still cannot do this, then what I am asking is: What does the
> project need to be able to at least link against such a compression
> algorithm?

Well, what we *really* need is a convincing argument that it's worth
taking some risk for. I find that not obvious. You can pipe the output
of pg_dump into your-choice-of-compressor, for example, and that gets
you the ability to spread the work across multiple CPUs in addition to
eliminating legal risk to the PG project. And in any case the general
impression seems to be that the main dump-speed bottleneck is on the
backend side, not in pg_dump's compression.
regards, tom lane