Re: Adding skip scan (including MDAM style range skip scan) to nbtree

From: Matthias van de Meent <boekewurm+postgres(at)gmail(dot)com>
To: Peter Geoghegan <pg(at)bowt(dot)ie>, Tomas Vondra <tomas(at)vondra(dot)me>
Cc: Masahiro(dot)Ikeda(at)nttdata(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org, Masao(dot)Fujii(at)nttdata(dot)com
Subject: Re: Adding skip scan (including MDAM style range skip scan) to nbtree
Date: 2024-09-12 14:49:24
Message-ID: CAEze2WgLavbhzUZBp_=-ObikngqK=tad1Et8b8W-kwb8gQJjPg@mail.gmail.com
Lists: pgsql-hackers

On Mon, 9 Sept 2024 at 21:55, Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
>
> On Sat, Sep 7, 2024 at 11:27 AM Tomas Vondra <tomas(at)vondra(dot)me> wrote:
> > I started looking at this patch today.
>
> Thanks for taking a look!
>
> > The first thing I usually do for
> > new patches is a stress test, so I did a simple script that generates
> > random table and runs a random query with IN() clause with various
> > configs (parallel query, index-only scans, ...). And it got stuck on a
> > parallel query pretty quick.
>
> I can reproduce this locally, without too much difficulty.
> Unfortunately, this is a bug on master/Postgres 17. Some kind of issue
> in my commit 5bf748b8.
[...]
> In short, one or two details of how backends call _bt_parallel_seize
> to pick up BTPARALLEL_NEED_PRIMSCAN work likely need to be rethought.

Thanks to Peter for the description; it helped me debug the issue. I
think I found a fix: the regression tests for 811af978 consistently got
stuck on my MacBook before the attached patch 0001, and after applying
that patch they completed just fine.

As I understand it, the issue is the following:

Only _bt_first can start a new primitive scan, so _bt_parallel_seize
only assigns a new primitive scan if the calling process is indeed in
_bt_first (signalled by calling _bt_parallel_seize(first=true)). All
other backends that hit the NEED_PRIMSCAN state currently pause until
some backend in _bt_first starts the next primitive scan.
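
For illustration, here's a condensed sketch of that behaviour in
_bt_parallel_seize (simplified and abridged, not the literal nbtree.c
code; the DONE and error paths are omitted):

/*
 * Illustrative sketch of _bt_parallel_seize's handling of
 * BTPARALLEL_NEED_PRIMSCAN; simplified, not the actual source.
 */
while (1)
{
    SpinLockAcquire(&btscan->btps_mutex);

    if (btscan->btps_pageStatus == BTPARALLEL_NEED_PRIMSCAN)
    {
        if (first)
        {
            /* We came from _bt_first: start the scheduled primitive scan */
            btscan->btps_pageStatus = BTPARALLEL_ADVANCING;
            exit_loop = true;
        }
        /* else: leave the state alone and sleep below */
    }
    else if (btscan->btps_pageStatus != BTPARALLEL_ADVANCING)
    {
        /* Seize the scan to advance it to the next page */
        btscan->btps_pageStatus = BTPARALLEL_ADVANCING;
        *pageno = btscan->btps_scanPage;
        exit_loop = true;
    }
    SpinLockRelease(&btscan->btps_mutex);

    if (exit_loop)
        break;

    /* Pause until some other backend moves the scan forward */
    ConditionVariableSleep(&btscan->btps_cv, WAIT_EVENT_BTREE_PAGE);
}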

A backend that hasn't requested the next primitive scan will likely
hit _bt_parallel_seize from code other than _bt_first, and thus pause.
If this is the leader process, it stops consuming tuples from the
worker processes.

If a worker process finds that a new primitive scan is required after
it finishes reading results from a page, it will first request the new
primitive scan, and only then start producing the tuples it already
read.

As such, we can have a worker process that has just finished reading a
page, has requested a new primitive scan, and now tries to send tuples
to its leader before getting back to _bt_first, but the leader won't
consume any tuples because it's waiting for that worker to start the
next primitive scan - now we're deadlocked.
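
Concretely, with one leader L and one worker W, the deadlock plays out
roughly like this:

1. W finishes reading a page and finds that a new primitive scan is
   required, so it sets NEED_PRIMSCAN in the shared scan state.
2. W then tries to send the tuples it already read to L; since L isn't
   consuming them, W eventually blocks.
3. Meanwhile, L hits _bt_parallel_seize from code other than _bt_first,
   sees NEED_PRIMSCAN with first=false, and goes to sleep on the
   condition variable instead of returning to drain W's queue.
4. No backend ever reaches _bt_first, so the primitive scan is never
   started and neither process can make progress.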

---

The fix in 0001 is relatively simple: we stop backends from waiting
for a concurrent backend to resolve the NEED_PRIMSCAN condition, and
instead advance our local state machine so that we'll reach _bt_first
ourselves and may be able to start the next primitive scan.
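
As a rough illustration of that direction (pseudocode only, with a
made-up local flag name; the attached patch is the authoritative
change), the branch in _bt_parallel_seize that sees NEED_PRIMSCAN with
first=false would no longer sleep:

if (btscan->btps_pageStatus == BTPARALLEL_NEED_PRIMSCAN)
{
    if (first)
    {
        /* Unchanged: we're in _bt_first, start the primitive scan */
        btscan->btps_pageStatus = BTPARALLEL_ADVANCING;
        exit_loop = true;
    }
    else
    {
        /*
         * New: don't sleep waiting for another backend.  Instead,
         * flag our local scan state so that this backend's current
         * primitive scan ends and control returns to _bt_first,
         * where we can seize the scan with first=true ourselves.
         * ("end_primscan_locally" is an illustrative name only.)
         */
        end_primscan_locally = true;
        exit_loop = true;
    }
}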

Also attached is 0002, which adds tracking of responsible backends to
parallel btree scans, allowing us to assert that we're never waiting
for our own process to move the state forward. I found this patch
helpful while working on the issue, even though it wouldn't have
caught the bug as reported.

Kind regards,

Matthias van de Meent
Neon (https://neon.tech)

Attachment Content-Type Size
v1-0001-Fix-stuck-parallel-btree-scans.patch application/octet-stream 2.6 KB
v1-0002-nbtree-add-tracking-of-processing-responsibilitie.patch application/octet-stream 4.2 KB
