From: | "ktm(at)rice(dot)edu" <ktm(at)rice(dot)edu> |
---|---|
To: | Andres Freund <andres(at)2ndquadrant(dot)com> |
Cc: | Bruce Momjian <bruce(at)momjian(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Jeff Davis <pgsql(at)j-davis(dot)com>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Compression of full-page-writes |
Date: | 2015-01-02 16:15:57 |
Message-ID: | 20150102161557.GA17646@aart.rice.edu |
Lists: pgsql-hackers
On Fri, Jan 02, 2015 at 01:01:06PM +0100, Andres Freund wrote:
> On 2014-12-31 16:09:31 -0500, Bruce Momjian wrote:
> > I still don't understand the value of adding WAL compression, given
> > the high CPU usage and minimal performance improvement. The only big
> > advantage is reduced WAL storage, but then why not just compress the
> > WAL files when archiving?
>
> before: pg_xlog is 800GB
> after: pg_xlog is 600GB.
>
> I'm damned sure that many people would be happy with that, even if the
> *per backend* overhead is a bit higher. And no, compressing the WAL at
> archive time helps *zilch* with that (streaming replication,
> wal_keep_segments, checkpoint_timeout), as discussed before.
>
> Greetings,
>
> Andres Freund
>
+1
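For reference, enabling the feature under discussion ends up looking
like the sketch below. This assumes the GUC as it later shipped in
PostgreSQL 9.5 (wal_compression); at the time of this thread it was
still the patch under review:

    # postgresql.conf -- illustrative sketch, assuming the
    # wal_compression GUC as shipped in PostgreSQL 9.5+
    wal_compression = on     # compress full-page images written to WAL
    full_page_writes = on    # the default; compression applies to
                             # these full-page images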
On an I/O-constrained system, assuming a 50:50 split between table and
WAL I/O, the 200GB of WAL saved in the case above frees enough
bandwidth to push roughly 100GB of additional table data (plus its
compressed WAL) through the same I/O budget, at the cost of a bit more
CPU; the arithmetic is sketched below.
Regards,
Ken