Re: Adding skip scan (including MDAM style range skip scan) to nbtree

From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Masahiro Ikeda <ikedamsh(at)oss(dot)nttdata(dot)com>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, Masahiro(dot)Ikeda(at)nttdata(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org, Masao(dot)Fujii(at)nttdata(dot)com
Subject: Re: Adding skip scan (including MDAM style range skip scan) to nbtree
Date: 2025-01-03 19:43:45
Message-ID: CAH2-WzmoVuvE_6h-HUuExu1KK1gKX5Hpbw0mUbWJcaqmK8VDVg@mail.gmail.com
Lists: pgsql-hackers

On Wed, Dec 11, 2024 at 2:13 PM Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
> Attached is v19.

I now attach v20. This revision simplifies the "skipskip"
optimization in the v20-0003-* patch. We now apply it on every page
that isn't the primitive index scan's first leaf page read (during
skip scans) -- we no longer activate it midway through scanning a
leaf page within _bt_readpage.
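
To make the shape of the revised policy concrete, here's a minimal
standalone C sketch (the type and function names are hypothetical
illustrations, not the actual patch code, and I'm ignoring whatever
additional heuristics feed into the decision):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct ScanState
    {
        bool    skipScan;   /* scan uses skip arrays? */
        bool    firstPage;  /* primitive scan's first leaf page read? */
        bool    skipskip;   /* suppress per-tuple skip array work? */
    } ScanState;

    /*
     * Decide the "skipskip" question once, up front, before any tuples
     * on the page are read.  Every page other than the primitive index
     * scan's first leaf page read gets the optimization; the decision
     * is never revisited midway through the page.
     */
    static void
    begin_readpage(ScanState *so)
    {
        so->skipskip = (so->skipScan && !so->firstPage);
    }

    int
    main(void)
    {
        ScanState   so = {.skipScan = true, .firstPage = true};

        begin_readpage(&so);
        printf("first leaf page:  skipskip=%d\n", so.skipskip);
        so.firstPage = false;
        begin_readpage(&so);
        printf("later leaf pages: skipskip=%d\n", so.skipskip);
        return 0;
    }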

The newly revised "skipskip" optimization seems to get the regressions
down to only a 5% - 10% increase in runtime across a wide variety of
unsympathetic cases -- I'm now validating performance against a test
suite based on the adversarial cases presented by Masahiro Ikeda on
this thread. Although I think that I'll end up tuning the "skipskip"
mechanism some more (I may have been too conservative in marginal
cases that actually do benefit from skipping), I deem these
regressions to be acceptable. They're only seen in the most
unsympathetic cases, where an omitted leading column has groupings of
no more than about 50 index tuples, making skipping pretty hopeless.

I knew from the outset that the hardest part of this project would be
avoiding regressions in highly unsympathetic cases. The regressions
that are still there seem very difficult to minimize any further; the
overhead that remains comes from the simple need to maintain the
scan's skip arrays once per page, before leaving the page. Once a scan
decides to apply the "skipskip" optimization, it tends to stick with
it for all future leaf pages -- leaving only the overhead of checking
the high key while advancing the scan's arrays. I've cut just about
everything that I can reasonably cut from the hot code paths at issue
in the regressed cases.
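
As a rough sketch of where that residual overhead lives (again
standalone C with hypothetical names -- tuple_passes_quals and
advance_skip_arrays are stand-ins for the real qual and skip array
machinery, not the patch's functions):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct IndexTuple
    {
        int     datums;     /* dummy payload */
    } IndexTuple;

    typedef struct LeafPage
    {
        const IndexTuple *tuples;
        int               ntuples;
        const IndexTuple *highkey;  /* upper bound for page's keys */
    } LeafPage;

    typedef struct ScanState
    {
        bool    skipskip;   /* set once per page, before reading */
    } ScanState;

    /* stand-ins; the real versions do the expensive work */
    static bool
    tuple_passes_quals(const IndexTuple *tup)
    {
        (void) tup;
        return true;
    }

    static void
    advance_skip_arrays(ScanState *so, const IndexTuple *tup)
    {
        (void) so;
        (void) tup;
    }

    static void
    read_leaf_page(ScanState *so, const LeafPage *page)
    {
        for (int i = 0; i < page->ntuples; i++)
        {
            if (so->skipskip)
                (void) tuple_passes_quals(&page->tuples[i]); /* cheap */
            else
                advance_skip_arrays(so, &page->tuples[i]);   /* full */
        }

        /*
         * The cost that remains even with "skipskip" applied: advance
         * the scan's skip arrays against the high key, once per page,
         * before leaving the page.
         */
        if (page->highkey != NULL)
            advance_skip_arrays(so, page->highkey);
    }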

It's important to have a sense of the context that these regressions
are seen in. We can reasonably hope that the optimizer wouldn't pick a
plan like this in the first place, and/or hope that the user would
create an appropriate index to avoid an inherently inefficient full
index scan (a scan like the one that I've regressed). Plus the
overhead only gets this high for index-only scans, where index
traversal costs will naturally dominate. If a user's query really is
made 5% - 10% slower, then the user probably doesn't consider the
query very performance critical. They're unlikely to notice a
regression of that size -- and creating the right index for the job
will make the query multiple times faster, at a minimum.

The break-even point where we should prefer to skip is pretty close to
a policy of simply always skipping -- especially with the "skipskip"
optimization/patch in place. That makes it seem unlikely that we could
do much better by giving the optimizer a greater role in things. I
just don't think that the optimizer has sufficiently accurate
information about the characteristics of the index to get anything
close to the level of precision that is required to avoid regressions.
For example, I see many queries that are ~5x faster than an
equivalent full index scan, despite only ever skipping over every
second leaf page -- the CPU cost savings remain large in such
"marginal" cases, and those savings are really hard to model using
the available statistics. If a cost function could be built at all,
it would be very sensitive to its parameters, and would exhibit very
nonlinear behavior. In general, something that behaves like that
seems unlikely to ever be truly reliable.

--
Peter Geoghegan

Attachment Content-Type Size
v20-0001-Show-index-search-count-in-EXPLAIN-ANALYZE.patch application/octet-stream 52.8 KB
v20-0002-Add-skip-scan-to-nbtree.patch application/octet-stream 174.3 KB
v20-0003-Lower-the-overhead-of-nbtree-runtime-skip-checks.patch application/octet-stream 21.2 KB
