Re: [PATCH] xlogreader: do not read a file block twice

From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Arthur Zakirov <a(dot)zakirov(at)postgrespro(dot)ru>
Cc: Grigory Smolkin <g(dot)smolkin(at)postgrespro(dot)ru>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: [PATCH] xlogreader: do not read a file block twice
Date: 2019-02-14 06:51:56
Message-ID: 20190214065156.GE2366@paquier.xyz
Lists: pgsql-hackers

On Tue, Feb 12, 2019 at 11:44:14AM +0300, Arthur Zakirov wrote:
> Of course. Agree, it may be a non trivial case. Added as a bug fix:
> https://commitfest.postgresql.org/22/1994/

I have been looking at the patch, and I agree that the current coding
is a bit crazy.  If the wanted data has already been read, it makes
little sense to read it again when the size requested by the caller
of ReadPageInternal() exactly equals what has already been read, yet
that is what the code currently does.

Now, I don't actually agree that this qualifies as a bug fix.  As
things stand, a page may end up being read more than once if what has
been read previously equals what is requested, but this does not
prevent the code from working correctly.  The performance gain also
depends heavily on the page-read callback and on the way the WAL
reader is used.  How do you actually read WAL pages in your own
plugin with compressed data?  Does it begin by reading a full page
once, then move on to per-record reads after making sure that the
page has been read?
--
Michael
