From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Masahiro Ikeda <ikedamsh(at)oss(dot)nttdata(dot)com>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, Masahiro(dot)Ikeda(at)nttdata(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org, Masao(dot)Fujii(at)nttdata(dot)com
Subject: Re: Adding skip scan (including MDAM style range skip scan) to nbtree
Date: 2024-11-20 19:40:05
Message-ID: CAH2-Wzk49UhvCTnhTz9N=A-+kYjr12aWVyqjDCoFbA2YV67gKA@mail.gmail.com
Lists: pgsql-hackers
On Wed, Nov 20, 2024 at 4:04 AM Masahiro Ikeda <ikedamsh(at)oss(dot)nttdata(dot)com> wrote:
> Thanks for your quick response!
Attached is v16. This is similar to v15, but the new
v16-0003-Fix-regressions* patch is much less buggy and easier to
understand.
Unlike v15, the experimental patch in v16 doesn't change anything
about which index pages are read by the scan -- not even in corner
cases. It is 100% limited to fixing the CPU overhead of maintaining
skip arrays uselessly *within* a leaf page. My extensive test suite
passes; it no longer shows any changes in "Buffers: N" for any of the
EXPLAIN (ANALYZE, BUFFERS) ... output that the tests look at. This is
what I'd expect.
I think that it will make sense to commit this patch as a separate
commit, immediately after skip scan itself is committed. It makes it
clear that, at least in theory, the new v16-0003-Fix-regressions*
patch doesn't change any behavior that's visible to code outside of
_bt_readpage/_bt_checkkeys/_bt_advance_array_keys.
> I didn't come up with the idea. At first glance, your idea seems good
> for all cases.
My approach of conditioning the new "beyondskip" behavior on
"has_skip_array && beyond_end_advance" is at least a good start.
The idea behind conditioning this behavior on having at least one
beyond_end_advance array advancement is pretty simple: in practice
that almost never happens during skip scans that actually end up
skipping (either via another _bt_first that redescends the index, or
via skipping "within the page" using the
_bt_checkkeys_look_ahead/pstate->skip mechanism). So that definitely
seems like a good general heuristic. It just isn't sufficient on its
own, as you have shown.
> Actually, test.sql shows a performance improvement, and the performance
> is almost the same as the master's seqscan. To be precise, the master's
> performance is 10-20% better than the v15 patch because the seqscan is
> executed in parallel. However, the v15 patch is twice as fast as when
> seqscan is not executed in parallel.
I think that that's a good result, overall.
Bear in mind that a case such as this might receive a big performance
benefit if it can skip only once or twice. It's almost impossible to
model those kinds of effects within the optimizer's cost model, but
they're still important effects.
FWIW, I notice that your "t" test table is 35 MB, whereas its t_idx
index is 21 MB. That's not very realistic (the index size is usually a
smaller fraction of the table size than we see here), which probably
partly explains why the planner likes parallel sequential scan for
this.
> However, I found that there is still a problematic case when I read your
> patch. IIUC, beyondskip becomes true only if the tuple's id2 is greater
> than the scan key value. Therefore, the following query (see
> test_for_v15.sql) still degrades.
As usual, you are correct. :-)
> I’m reporting the above result, though you might already be aware of the
> issue.
Thanks!
I have an experimental fix in mind for this case. One not-very-good
way to fix this new problem seems to work:
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index b70b58e0c..ddae5f2a1 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -3640,7 +3640,7 @@ _bt_advance_array_keys(IndexScanDesc scan, BTReadPageState *pstate,
 	 * for skip scan, and stop maintaining the scan's skip arrays until we
 	 * reach the page's finaltup, if any.
 	 */
-	if (has_skip_array && beyond_end_advance &&
+	if (has_skip_array && !all_required_satisfied &&
 		!has_required_opposite_direction_skip && pstate->finaltup)
 		pstate->beyondskip = true;
However, a small number of my test cases now fail. And (I assume) this
approach has certain downsides on leaf pages where we're now too quick
to stop maintaining skip arrays.
What I really need to do next is to provide a vigorous argument for
why the new pstate->beyondskip behavior is correct. I'm already
imposing restrictions on range skip arrays in v16 of the patch --
that's what the "!has_required_opposite_direction_skip" portion of the
test is about. But it still feels too ad-hoc.
I'm a little worried that these restrictions on range skip arrays will
themselves be the problem for some other kind of query. Imagine a
query like this:
SELECT * FROM t WHERE id1 BETWEEN 0 AND 1_000_000 AND id2 = 1
This is probably going to be regressed due to the aforementioned
"!has_required_opposite_direction_skip" restriction. Right now I don't
fully understand what restrictions are truly necessary, though. More
research is needed.
I think for v17 I'll properly fix all of the regressions that you've
complained about so far, including the most recent "SELECT * FROM t
WHERE id2 = 1_000_000" regression. Hopefully the best fix for this
other "WHERE id1 BETWEEN 0 AND 1_000_000 AND id2 = 1" regression will
become clearer once I get that far. What do you think?
> Yes, I agree. Therefore, even if I can't think of a way to prevent
> regressions or if I can only think of improvements that would
> significantly sacrifice the benefits of skip scan, I would still like
> to report any regression cases if they occur.
You're right, of course. It might make sense to accept some very small
regressions. But not if we can basically avoid all regressions. Which
may well be an attainable goal.
> There may be a better way, such as the new idea you suggested, and I
> think there is room for discussion regarding how far we should go in
> handling regressions, regardless of whether we choose to accept
> regressions or sacrifice the benefits of skip scan to address them.
There are definitely lots more options to address these regressions.
For example, we could have the planner hint that it thinks that skip
scan won't be a good idea, without that actually changing the basic
choices that nbtree makes about which pages it needs to scan (only how
to scan each individual leaf page). Or, we could remember that the
previous page used "pstate->beyondskip" each time _bt_readpage reads
another page. I could probably think of 2 or 3 more ideas like that,
if I had to.
However, the problem is not a lack of ideas IMV. The important
trade-off is likely to be the trade-off between how effectively we can
avoid these regressions versus how much complexity each approach
imposes. My guess is that complexity is more likely to impose limits
on us than overall feasibility.
--
Peter Geoghegan
Attachments:
- v16-0003-Fix-regressions-in-unsympathetic-skip-scan-cases.patch (application/octet-stream, 12.8 KB)
- v16-0001-Show-index-search-count-in-EXPLAIN-ANALYZE.patch (application/octet-stream, 52.5 KB)
- v16-0002-Add-skip-scan-to-nbtree.patch (application/octet-stream, 174.1 KB)