Re: Adding skip scan (including MDAM style range skip scan) to nbtree

From: Masahiro Ikeda <ikedamsh(at)oss(dot)nttdata(dot)com>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, Masahiro(dot)Ikeda(at)nttdata(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org, Masao(dot)Fujii(at)nttdata(dot)com
Subject: Re: Adding skip scan (including MDAM style range skip scan) to nbtree
Date: 2024-11-29 02:33:47
Message-ID: 96f70d0fd5193c6b2812d041d6d7a703@oss.nttdata.com
Lists: pgsql-hackers

On 2024-11-26 07:32, Peter Geoghegan wrote:
> On Mon, Nov 25, 2024 at 4:07 AM Masahiro Ikeda
> <ikedamsh(at)oss(dot)nttdata(dot)com> wrote:
>> One thing I am concerned about is that it reduces the cases where the
>> optimization of _bt_checkkeys_look_ahead() works effectively, as the
>> approach skips the skip immediately on the first occurrence per page.
>
> I noticed that with the recent v17 revision of the patch, my original
> MDAM paper "sales_mdam_paper" test case (the complicated query in the
> introductory email of this thread) was about 2x slower. That's just
> not okay, obviously. But the issue was relatively easy to fix: it was
> fixed by making _bt_readpage not apply the "skipskip" optimization
> when on the first page for the current primitive index scan -- we
> already do this with the "precheck" optimization, so it's natural to
> do it with the "skipskip" optimization as well.
>
> The "sales_mdam_paper" test case involves thousands of primitive index
> scans that each access only one leaf page. But each leaf page returns
> 2 non-adjoining tuples, with quite a few non-matching tuples "in
> between" the matching tuples. There is one matching tuple for "store =
> 200", and another for "store = 250" -- and there's non-matching stores
> 201 - 249 between these two, which we want _bt_checkkeys_look_ahead to
> skip over. This is exactly the kind of case where the
> _bt_checkkeys_look_ahead() optimization is expected to help.

Great! Your new approach strikes a good balance between the trade-offs
of the "skipskip" and "look ahead" optimizations. Although the regression
case I provided seems to be a corner case, your regression case is
realistic and should be addressed.
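
Just to check that I've understood the shape of the test case, here is a
minimal sketch of what I have in mind (the table, index, data, and query
below are hypothetical illustrations, not the actual sales_mdam_paper
test case):

CREATE TABLE sales (dept int, store int, amount numeric);
CREATE INDEX sales_dept_store_idx ON sales (dept, store);

-- stores 200..250 for every dept, so each dept's index tuples for
-- stores 201..249 sit between the two matching stores
INSERT INTO sales
SELECT dept, store, random() * 100
FROM generate_series(1, 5000) AS dept,
     generate_series(200, 250) AS store;

-- skip scan over the omitted leading column (dept): each primitive scan
-- should return the store = 200 and store = 250 tuples while
-- _bt_checkkeys_look_ahead() skips the non-matching stores in between
SELECT count(*) FROM sales WHERE store IN (200, 250);

If that roughly matches your setup, then I agree this is exactly the kind
of case where _bt_checkkeys_look_ahead() should keep helping.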

>> Again, the above results are provided for reference, as I believe that
>> many users prioritize stability and I'd like to take your new
>> approach.
>
> Adversarial cases specifically designed to "make the patch look bad"
> are definitely useful review feedback. Ideally, the patch will be 100%
> free of regressions -- no matter how unlikely (or even silly) they may
> seem. I always prefer to not have to rely on anybody's opinion of what
> is likely or unlikely. :-)
>
> A quick test seems to show that this particular regression is more or
> less fixed by v18. As you said, the _bt_checkkeys_look_ahead stuff is
> the issue here (same with the MDAM sales query). You should confirm
> that the issue has actually been fixed, though.

With the new v18 patch, I have confirmed that the issue is fixed.

I have no comments on the new patches. If I find any new regression
cases, I'll report them.

Regards,
--
Masahiro Ikeda
NTT DATA CORPORATION
