From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Andres Freund <andres(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Compression of full-page-writes
Date: 2014-12-08 20:33:31
Message-ID: 54860B1B.2030401@vmware.com
Lists: pgsql-hackers
On 12/08/2014 09:21 PM, Andres Freund wrote:
> I still think that just compressing the whole record if it's above a
> certain size is going to be better than compressing individual
parts. Michael argued that that'd be complicated because of the varying
> size of the required 'scratch space'. I don't buy that argument
> though. It's easy enough to simply compress all the data in some fixed
> chunk size. I.e. always compress 64kb in one go. If there's more
> compress that independently.
Doing it in fixed-size chunks doesn't help - you have to hold onto the
compressed data until it's written to the WAL buffers.
But you could just allocate a "large enough" scratch buffer, and give up
if it doesn't fit. If the compressed data doesn't fit in e.g. 3 * 8kb,
it didn't compress very well, so there's probably no point in
compressing it anyway. Now, an exception to that might be a record that
contains something other than page data, like a commit record with
millions of subxids, but I think we could live with not compressing
those, even though it would be beneficial to do so.
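The "bounded scratch buffer, give up if it doesn't fit" idea can be sketched as follows. This is a minimal illustration in Python using zlib; the names (`SCRATCH_SIZE`, `compress_or_give_up`) and the use of zlib are my own for illustration — the actual PostgreSQL code would be C and use pglz:

```python
import zlib
import os

SCRATCH_SIZE = 3 * 8192  # the "3 * 8kb" scratch area from the discussion


def compress_or_give_up(record: bytes):
    """Compress a record into a bounded scratch area.

    Returns the compressed bytes, or None to signal "give up and write
    the record uncompressed" — either the result wouldn't fit in the
    scratch buffer, or compression didn't shrink the record at all.
    """
    compressed = zlib.compress(record)
    if len(compressed) > SCRATCH_SIZE or len(compressed) >= len(record):
        return None
    return compressed


# Highly compressible page data: fits easily, so we compress.
print(compress_or_give_up(b"\x00" * 32768) is not None)

# Incompressible data: compression doesn't help, so we give up
# and would write the record uncompressed.
print(compress_or_give_up(os.urandom(65536)) is None)
```

The point of the bound is that a record which can't squeeze into 24 kB was barely compressible to begin with, so falling back to the uncompressed path loses almost nothing.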
- Heikki