From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Masahiro(dot)Ikeda(at)nttdata(dot)com
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org, Masao(dot)Fujii(at)nttdata(dot)com
Subject: Re: Adding skip scan (including MDAM style range skip scan) to nbtree
Date: 2024-09-04 16:52:57
Message-ID: CAH2-Wz=PKR6rB7qbx+Vnd7eqeB5VTcrW=iJvAsTsKbdG+kW_UA@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jul 15, 2024 at 2:34 PM Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
> On Fri, Jul 12, 2024 at 1:19 AM <Masahiro(dot)Ikeda(at)nttdata(dot)com> wrote:
> > I found that the cost is estimated to be much higher if the number of skipped
> > attributes is more than two. Is this expected behavior?
>
> Yes and no.
>
> Honestly, the current costing is just placeholder code. It is totally
> inadequate. I'm not surprised that you found problems with it. I just
> didn't put much work into it, because I didn't really know what to do.

Attached is v6, which finally does something sensible in btcostestimate.

v6 is also the first version that supports parallel index scans that
can skip. This works by extending the approach already taken for scans
with regular SAOP arrays to cover skip arrays. We need to serialize and
deserialize the current array keys in shared memory, as datums -- we
cannot just use simple BTArrayKeyInfo.cur_elem offsets with skip
arrays, since a skip array's elements are generated procedurally rather
than stored in a fixed element list.
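
Roughly speaking, the shared memory handling looks like this (a
simplified sketch built on the backend's generic datum serialization
support from utils/datum.h -- the function names and layout here are
invented for illustration, not taken from the patch):

    #include "postgres.h"
    #include "utils/datum.h"

    /*
     * Sketch: copy one skip array's current key into a shared memory
     * chunk.  The caller is assumed to have sized the chunk using
     * datumEstimateSpace().
     */
    static char *
    serialize_skip_array_key(Datum curkey, bool isnull,
                             bool typByVal, int typLen,
                             char *shm_cursor)
    {
        datumSerialize(curkey, isnull, typByVal, typLen, &shm_cursor);
        return shm_cursor;      /* now points past the serialized datum */
    }

    /*
     * Sketch: reconstruct the datum in a parallel worker.  datumRestore()
     * pallocs a local copy for by-reference types, so the worker doesn't
     * depend on the shared memory chunk staying around.
     */
    static Datum
    restore_skip_array_key(char **shm_cursor, bool *isnull)
    {
        return datumRestore(shm_cursor, isnull);
    }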

v6 also includes the patch that shows "Index Searches" in EXPLAIN
ANALYZE output, just because it's convenient when testing the patch.
This has been independently submitted as
https://commitfest.postgresql.org/49/5183/, so it probably doesn't need
review here.

v6 is the first version of the patch that is basically feature
complete. I only have one big open item left: I must still fix certain
regressions seen with queries that are very unfavorable for skip scan,
where the CPU cost (but not I/O cost) of maintaining skip arrays slows
things down. Overall, I'm making fast progress here.

Back to the topic of the btcostestimate/planner changes; the rest of
this email discusses the cost model.

The planner changes probably still have some rough edges, but all of
the obvious problems have been fixed by v6. I found it useful to focus
on making the cost model avoid any obvious problems, rather than trying
to make it match some purely theoretical ideal. For example, your
(Ikeda-san's) complaint about the "Index Scan using idx_id1_id2_id3 on
public.test" test case being costed too high (higher than the cost of a
slower sequential scan) has been addressed. That plan is now costed at
about a third of the sequential scan's cost, since v6 actually pays
attention to ndistinct.

Just like when we cost SAOP arrays on HEAD, skip arrays are costed by
pessimistically multiplying together the estimated number of array
elements for all the scan's arrays, without trying to account for
correlation between index columns. Being pessimistic about
correlations like this is often wrong, but that still seems like the
best bias we could have, all things considered. Plus it's nothing new.
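
The basic shape is the same as the existing SAOP logic in selfuncs.c
(this is an illustrative sketch with invented names, not code lifted
from the patch):

    /*
     * Sketch: each array (SAOP or skip) contributes its estimated
     * element count, and the counts are simply multiplied together.
     * The product is the estimated number of primitive index searches,
     * which the cost model uses as a multiplier.  Multiplying is
     * pessimistic whenever the index columns are correlated.
     */
    static double
    estimate_num_index_searches(const double *array_elem_counts,
                                int narrays)
    {
        double  num_sa_scans = 1.0;

        for (int i = 0; i < narrays; i++)
            num_sa_scans *= array_elem_counts[i];

        return num_sa_scans;
    }

For example, a skip array over a column with ndistinct = 50 combined
with a 3 element IN() list yields an estimate of 50 * 3 = 150 index
searches.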

Range style skip arrays require a slightly more complicated approach
to estimating the number of array elements: costing takes a selectivity
estimate from the associated index column's inequality keys, and
applies that estimate to ndistinct itself. That way the cost of a range
skip array is lower than the cost of an otherwise-equivalent simple
skip array (since we prorate ndistinct for range skip arrays). More
importantly, the cost of more selective ranges is lower than the cost
of less selective ranges. There is also a bias here: we don't account
for skew in ndistinct. That's probably OK, because at least it's a bias
*against* skip scan.
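
In other words (another invented-names sketch, just to make the
proration concrete):

    /*
     * Sketch: estimate the element count for a range skip array by
     * prorating the column's ndistinct by the selectivity of the
     * range's inequality keys, clamping to at least one element.
     */
    static double
    estimate_range_skip_array_elems(double ndistinct, double range_sel)
    {
        double  num_elems = ndistinct * range_sel;

        if (num_elems < 1.0)
            num_elems = 1.0;
        return num_elems;
    }

So with ndistinct = 1,000 and a range that matches 10% of the column's
values, we estimate ~100 elements, versus 1,000 for a simple
(non-range) skip array on the same column.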

The new cost model does not specifically try to account for how scans
will behave when no skipping should be expected at all -- cases where a
so-called "skip scan" degenerates into a full index scan. In theory, we
should be costing these scans the same as before, since there has been
no change in runtime behavior. In practice, though, the cost of full
index scans with very many distinct prefix column values goes down by
quite a bit -- something like 1/3 lower in typical cases.

The problem with preserving HEAD's cost model for these
skip-scan-unfavorable cases is that I don't feel that I understand the
existing behavior. In practice the revised costing seems to be a
somewhat more accurate predictor of actual query runtime. Another
problem is that I can't see a good way to keep the behavior continuous
as ndistinct grows from values that favor skipping to values so large
that we should expect a true full index scan. (As I mentioned at the
start of this email, there are unfixed regressions in these unfavorable
cases, so to correct for that I'm basing this analysis on the "set
skipscan_prefix_cols = 0" behavior, rather than on the patch's current
default behavior. With skipscan_prefix_cols = 0 the scan matches HEAD's
full index scan behavior, which is what the default behavior should
match in a future version of the skip scan patch.)

--
Peter Geoghegan

Attachment Content-Type Size
v6-0001-Show-index-search-count-in-EXPLAIN-ANALYZE.patch application/octet-stream 45.5 KB
v6-0002-Normalize-nbtree-truncated-high-key-array-behavio.patch application/octet-stream 12.9 KB
v6-0003-Refactor-handling-of-nbtree-array-redundancies.patch application/octet-stream 18.7 KB
v6-0004-Add-skip-scan-to-nbtree.patch application/octet-stream 154.7 KB
