Re: Adding skip scan (including MDAM style range skip scan) to nbtree

From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Tomas Vondra <tomas(at)vondra(dot)me>
Cc: Masahiro(dot)Ikeda(at)nttdata(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org, Masao(dot)Fujii(at)nttdata(dot)com
Subject: Re: Adding skip scan (including MDAM style range skip scan) to nbtree
Date: 2024-09-20 14:59:41
Message-ID: CAH2-Wzm3FQDMGa61-8DRjM13O=TKsWbFDM2cOdBSK=1E=DcmhA@mail.gmail.com
Lists: pgsql-hackers

On Fri, Sep 20, 2024 at 10:07 AM Tomas Vondra <tomas(at)vondra(dot)me> wrote:
> Yes, I think backpatching to 17 would be fine. I'd be worried about
> maybe disrupting some monitoring in production systems, but for 17 that
> shouldn't be a problem yet. So fine with me.

I'll commit minimal changes to _bt_first that at least make the
counters consistent, then. I'll do so soon.

> FWIW I wonder how likely is it that someone has some sort of alerting
> tied to this counter. I'd bet few people do. It's probably more about a
> couple people looking at explain plans, but they'll be confused even if
> we change that only starting with 17.

On 17 the behavior in this area is totally different, either way.

> Ah, OK. So we do probe the index like this. I was under the impression
> we don't do that. But yeah, this makes sense.

Well, we don't have *explicit* next-key probes. If you think of values
like "Aardvark" + infinitesimal as just another array value (albeit
one that requires a little special handling in _bt_first), then there
are no explicit probes. There are no true special cases required.

Maybe this sounds like a very academic point. I don't think that it
is, though. Bear in mind that even when _bt_first searches for index
tuples matching a value like "Aardvark" + infinitesimal, there's some
chance that _bt_search will return a leaf page with tuples that the
index scan ultimately returns. And so there really is no "separate
explicit probe" of the kind the MDAM paper contemplates.

When this happens, we won't get any exact matches for the sentinel
search value, but there could still be matches for (say) "WHERE a =
'Abacus' AND b = 55" on that same leaf page. In general, repositioning
the scan to later "within" the 'Abacus' index tuples might not be
required -- our initial position (based on the sentinel search key)
could be "close enough". This outcome is more likely to happen if the
query happened to be written "WHERE b = 1", rather than "WHERE b =
55".

> Yes, it does. Most of my confusion was caused by my belief that we can't
> probe the index for the next value without "incrementing" the current
> value, but that was a silly idea.

It's not a silly idea, I think. Technically that understanding is
fairly accurate -- we often *do* have to "increment" to get to the
next value (though reading the next value from an index tuple and then
repositioning using it with later/less significant scan keys is the
other possibility).

Incrementing is always possible, even without skip support, because we
can always fall back on +infinitesimal style sentinel values (AKA
SK_BT_NEXTPRIOR values). That's the definitional sleight-of-hand that
allows _bt_advance_array_keys to not have to think about skip arrays
as a special case, regardless of whether or not they happen to have
skip support.
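
Roughly speaking (toy code again, with made-up names), the
sleight-of-hand looks like this: advancing a skip array always yields a
well-defined "next" search key, whether that's a real incremented value
(skip support available) or the current value with a sentinel flag set
(no skip support). The caller never needs to know which one it got:

/*
 * Toy sketch of advancing a skip array past its current value (made-up
 * names, not the real nbtree code).  With skip support we can produce
 * the next real value directly; without it we fall back on
 * current + infinitesimal.  Either way the caller just sees "the next
 * array key".
 */
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct SkipArrayKey
{
    long        value;
    bool        next_flag;      /* value + infinitesimal sentinel? */
} SkipArrayKey;

typedef struct ToySkipSupport
{
    /* increment in place; return false on overflow (no next value) */
    bool        (*increment) (long *value);
} ToySkipSupport;

static bool
int_increment(long *value)
{
    if (*value == LONG_MAX)
        return false;
    (*value)++;
    return true;
}

/* Advance the skip array past its current value; false means exhausted */
static bool
skip_array_advance(SkipArrayKey *key, const ToySkipSupport *sksup)
{
    if (sksup != NULL)
        return sksup->increment(&key->value);   /* real next value */

    key->next_flag = true;                      /* sentinel fallback */
    return true;
}

int
main(void)
{
    ToySkipSupport intsup = {int_increment};
    SkipArrayKey with = {41, false};
    SkipArrayKey without = {41, false};

    skip_array_advance(&with, &intsup);
    skip_array_advance(&without, NULL);

    printf("with skip support:    search for %ld\n", with.value);
    printf("without skip support: search for %ld + infinitesimal (%d)\n",
           without.value, (int) without.next_flag);
    return 0;
}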

> > It might be possible to add skip support for text, but there wouldn't
> > be much point.
> >
>
> Stupid question - so why does it make sense for types like int? There
> can also be a lot of values between the current and the next value, so
> why would that be very different from "incrementing" a text value?

Not a stupid question at all. You're right; it'd be the same.

Obviously, there are at least some workloads (probably most) where any
int columns will contain values that are more or less fully
contiguous. I also expect there to be some workloads where int columns
appear in B-Tree indexes that contain values with large gaps between
neighboring values (e.g., because the integers are hash values). We'll
always use skip support for any omitted prefix int column (same with
any opclass that offers skip support), but we can only expect to see a
benefit in the former "dense" cases -- never in the latter "sparse"
cases.

The MDAM paper talks about an adaptive strategy for dense columns and
sparse columns. I don't see any point in that, and assume that it's
down to some kind of implementation deficiencies in NonStop SQL back
in the 1990s. I can just always use skip support in the hope that
integer column data will turn out to be "dense", because there's no
downside to being optimistic about it. The access patterns are exactly
the same as they'd be with skip support disabled.
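
One way to see why being optimistic costs nothing (again just a toy
illustration, not the patch): on a "sparse" column, repositioning on
the incremented value v + 1 lands the scan in exactly the same place as
the v + infinitesimal sentinel would have. Skip support only changes
the outcome when v + 1 actually exists in the index, i.e. when the data
is "dense":

/*
 * Toy demonstration (not nbtree code): with "sparse" data,
 * repositioning on the incremented value (v + 1) lands in the same
 * place as the sentinel (v + infinitesimal) would have, so always
 * using skip support costs nothing.  Only with "dense" data does
 * v + 1 find extra matches.
 */
#include <stdio.h>

/* first offset whose value is >= key */
static int
lower_bound(const int *vals, int n, int key)
{
    int         lo = 0,
                hi = n;

    while (lo < hi)
    {
        int         mid = (lo + hi) / 2;

        if (vals[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

/* first offset whose value is > key (i.e. >= key + infinitesimal) */
static int
upper_bound(const int *vals, int n, int key)
{
    int         lo = 0,
                hi = n;

    while (lo < hi)
    {
        int         mid = (lo + hi) / 2;

        if (vals[mid] <= key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

int
main(void)
{
    /* "sparse" leading column values, e.g. hash-like integers */
    int         sparse[] = {7, 19, 583, 1204, 99872};
    int         v = 19;

    printf("reposition on %d + 1:             offset %d\n",
           v + 1, lower_bound(sparse, 5, v + 1));
    printf("reposition on %d + infinitesimal: offset %d\n",
           v, upper_bound(sparse, 5, v));
    return 0;
}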

My "academic point" about not having *explicit* next-key probes might
make more sense now. This is the thing that makes it okay to always be
optimistic about types with skip support containing "dense" data.

FWIW I actually have skip support for the UUID opclass. I implemented
it to have test coverage for pass-by-reference types in certain code
paths, but otherwise I don't expect it to be useful -- in
practice all UUID columns contain "sparse" data. There's still no real
downside to it, though. (I wouldn't try to do it with text because
it'd be much harder to implement skip support correctly, especially
with collated text.)
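
The UUID increment itself is simple enough -- just "add one" to the 16
bytes treated as a big-endian integer. A rough sketch of the idea (with
a made-up helper name, not the actual opclass code):

/*
 * Sketch of incrementing a 16-byte UUID treated as a big-endian
 * unsigned integer (made-up helper, not the actual opclass code).
 * Returns false when the value is already all-ones, i.e. there is no
 * next UUID.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool
uuid_increment(uint8_t uuid[16])
{
    for (int i = 15; i >= 0; i--)
    {
        if (uuid[i] != 0xFF)
        {
            uuid[i]++;
            return true;
        }
        uuid[i] = 0x00;         /* carry into the next byte */
    }
    return false;               /* was already the maximum UUID */
}

int
main(void)
{
    uint8_t     u[16] = {0};

    u[15] = 0xFF;               /* ...000000ff */
    uuid_increment(u);          /* becomes ...00000100 */

    for (int i = 0; i < 16; i++)
        printf("%02x", u[i]);
    printf("\n");
    return 0;
}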

--
Peter Geoghegan
