Re: Adding skip scan (including MDAM style range skip scan) to nbtree

From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Masahiro Ikeda <ikedamsh(at)oss(dot)nttdata(dot)com>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, Masahiro(dot)Ikeda(at)nttdata(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org, Masao(dot)Fujii(at)nttdata(dot)com
Subject: Re: Adding skip scan (including MDAM style range skip scan) to nbtree
Date: 2024-11-22 22:34:14
Message-ID: CAH2-Wz=SXWMOMvaTZjzv96+QaZFsHZ_H=f-qP-xSrFCWxKwC3g@mail.gmail.com
Lists: pgsql-hackers

On Fri, Nov 22, 2024 at 4:17 AM Masahiro Ikeda <ikedamsh(at)oss(dot)nttdata(dot)com> wrote:
> Though the change fixes the assertion error in 'make check', there are
> still cases where the number of return values is incorrect. I would
> also like to continue investigating.

Thanks for taking a look at that! I've come up with a better approach,
though (sorry about how quickly this keeps changing!). See the
attached new revision, v17.

I'm now calling the new optimization from the third patch the
"skipskip" optimization. I believe that v17 will fix those bugs that
you saw -- let me know if those are gone now. It also greatly improves
the situation with regressions (more on that later).

New approach
------------

Before now, I was trying to preserve the usual invariants that we have
for the scan's array keys: the array keys must "track the progress" of
the scan through the index's key space -- that's what those
_bt_tuple_before_array_skeys precondition and postcondition asserts
inside _bt_advance_array_keys verify for us (these assertions have
proven very valuable, both now and during the earlier Postgres 17
nbtree SAOP project). My original approach (to managing the overhead
of maintaining skip arrays in adversarial/unsympathetic cases) was
overly complicated, inflexible, and brittle.

It's simpler (much simpler) to allow the scan to temporarily forget
about the invariants: once the "skipskip" optimization is activated,
we don't care about the rules that require that "the array keys track
progress through the key space" -- not until _bt_readpage actually
reaches the page's finaltup. Then _bt_readpage can handle the problem
using a trick that we already use elsewhere (e.g., in btrestrpos):
_bt_readpage just calls _bt_start_array_keys to get the array keys to
their initial positions (in the current scan direction), before
calling _bt_checkkeys for finaltup in the usual way.

Under this scheme, the _bt_checkkeys call for finaltup will definitely
call _bt_advance_array_keys to advance the array keys to the correct
place (the correct place when scanning forward is ">= finaltup in the
key space"). The truly essential thing is that we never accidentally
allow the array keys to "prematurely get ahead of the scan's tuple in
the keyspace" -- that leads to wrong answers to queries once we reach
the next page. But the arrays can be allowed to temporarily remain
*before* the scan's tuples without consequence (it's safe when the
skipskip optimization is in effect, at least, since the _bt_checkkeys
calls treat everything as a non-required key, and so
_bt_checkkeys/_bt_advance_array_keys will never expect any non-skipped
SAOP arrays to tell them anything about the scan's progress through
the index's key space -- there'll be no unsafe "cur_elem_trig" binary
searches, for example).

This approach also allowed me to restore all of the assertions in
nbtutils.c to their original form. That was important to me -- those
assertions have saved me quite a few times.

Regressions, performance improvements
-------------------------------------

As I touched on briefly already, the new approach is also
significantly *faster* than the master branch in certain "adversarial"
cases (this is explained in the next paragraph). Overall, all of the
cases that I know of that were unacceptably regressed before now
(e.g., all of the cases that you brought to my attention so far,
Masahiro) are either significantly faster or only very slightly
slower. The regressions that we still have in v17 are probably
acceptable -- though clearly I still have a lot of performance
validation work to do before reaching a final conclusion about
regressions.

I also attach a variant of your test_for_v15.sql test case, Masahiro.
I used this to do some basic performance validation of this latest
version of the patch -- it'll give you a general sense of where we are
with regressions, and will also show those "adversarial" cases that
end up significantly faster than the master branch with v17.
Significant improvements are sometimes seen because the "skipskip"
optimization replaces the requiredOppositeDirOnly optimization (see
commit e0b1ee17dc for context). We can do better than the existing
requiredOppositeDirOnly optimization because we can skip more
individual scan key comparisons. For example, with this query:

SELECT * FROM t WHERE id1 BETWEEN 0 AND 1_000_000 AND id2 = 1_000_000

This query goes from taking ~25ms with a warm cache on master to only
~17ms on v17 of the patch series. I wasn't really trying to make
anything faster here -- it's a nice bonus.
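
If you want to reproduce that comparison without running the full
attached test case, a minimal setup along these lines should show the
general effect (this is only a sketch -- the table contents, index
name, and row count are illustrative, not what test_for_v17.sql does):

DROP TABLE IF EXISTS t;
CREATE TABLE t (id1 int, id2 int);
-- illustrative data: "id2 = 1_000_000" is rarely true, while the
-- range on id1 covers every row
INSERT INTO t SELECT i, i FROM generate_series(1, 1000000) i;
CREATE INDEX t_id1_id2_idx ON t (id1, id2);
VACUUM ANALYZE t;

-- run twice to warm the cache, then compare master against v17
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM t WHERE id1 BETWEEN 0 AND 1_000_000 AND id2 = 1_000_000;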

There are 3 scan keys involved here when the query is run on the
master branch: "id1 >= 0", "id1 <= 1_000_000", and "id2 = 1_000_000".
The existing requiredOppositeDirOnly optimization doesn't work because
only one page will ever have its pstate->first set to 'true' within
_bt_readpage -- that's due to "id2 = 1_000_000" (a non-required scan
key) only rarely evaluating to true. Whereas with skip scan (and its
"skipskip" optimization), there are only 2 scan keys (well, sort of):
the range skip array scan key on "id1", plus the "id2 = 1_000_000"
scan key. We'll be able to skip over the range skip array scan key
entirely with the "skipskip" optimization, so the range skip array's
lower_bound and upper_bound "subsidiary" scan keys won't need to be
evaluated more than once per affected leaf page. In other words, we'll
still need to evaluate the "id2 = 1_000_000" scan key against every tuple on
every leaf page -- but we don't need to use the >= or <=
subsidiary-to-skip-array scan keys (except once per page, to establish
that the optimization is safe).
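
To make that scan key structure a bit more concrete: conceptually
(this is only an illustration of the range skip array idea, not
anything the planner literally produces), the scan behaves roughly as
if the query had been written as

SELECT * FROM t
WHERE id1 = ANY (SELECT DISTINCT id1 FROM t
                 WHERE id1 BETWEEN 0 AND 1_000_000)
  AND id2 = 1_000_000;

with the skip array supplying the "= ANY" values on the fly; the
"skipskip" optimization then avoids most of the per-tuple work for
that array on affected leaf pages, as described above.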

--
Peter Geoghegan

Attachment Content-Type Size
v17-0001-Show-index-search-count-in-EXPLAIN-ANALYZE.patch application/octet-stream 52.5 KB
v17-0002-Add-skip-scan-to-nbtree.patch application/octet-stream 174.1 KB
v17-0003-Add-skipskip-nbtree-skip-scan-optimization.patch application/octet-stream 18.6 KB
test_for_v17.sql application/octet-stream 6.5 KB
