From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
Cc: Andrey Borodin <x4mmm(at)yandex-team(dot)ru>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, Andres Freund <andres(at)anarazel(dot)de>
Subject: Re: Different compression methods for FPI
Date: 2021-06-17 01:12:04
Message-ID: YMqhZF6SrqccZk4p@paquier.xyz
Lists: pgsql-hackers
On Wed, Jun 16, 2021 at 11:49:51AM +0300, Heikki Linnakangas wrote:
> Hmm, do we currently compress each block in a WAL record separately, for
> records that contain multiple full-page images? That could make a big
> difference e.g. for GiST index build that WAL-logs 32 pages in each record.
> If it helps the compression, we should probably start WAL-logging b-tree
> index build in larger batches, too.
Each block is compressed individually; see XLogCompressBackupBlock() as
called from XLogRecordAssemble(), where we loop over each block.
Compressing a group of blocks would not be difficult (though the
refactoring may be trickier than it looks), but I am wondering how we
should handle the case where a group of blocks fails to compress, since
there is a safety fallback so that a block that cannot be compressed
does not cause a failure. Should we fall back to compressing the
blocks individually, or just log all those pages uncompressed, without
their holes? I don't really expect a group of blocks to fail to
compress; I am just being a bit paranoid about the fallback we'd better
have.
--
Michael