Re: Streaming read-ready sequential scan code

From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: Alexander Lakhin <exclusion(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, David Rowley <dgrowleyml(at)gmail(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Streaming read-ready sequential scan code
Date: 2024-05-18 04:47:20
Message-ID: CA+hUKGKpw8KBL_V4yhCMAbi6jf5rLyFz-K2MrwuDTHTcUytSVw@mail.gmail.com
Lists: pgsql-hackers

On Sat, May 18, 2024 at 11:30 AM Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:
> Andres happened to have TPC-DS handy, and reproduced that regression
> in q15. We tried some stuff and figured out that it requires
> parallel_leader_participation=on, ie that this looks like some kind of
> parallel fairness and/or timing problem. It seems to be a question of
> which worker finishes up processing matching rows, and the leader gets
> a ~10ms head start but may be a little more greedy with the new
> streaming code. He tried reordering the table contents and then saw
> 17 beat 16. So for q15, initial indications are that this isn't a
> fundamental regression, it's just a test that is sensitive to some
> arbitrary conditions.
>
> I'll try to figure out some more details about that, ie is it being
> too greedy on small-ish tables,

After more debugging, we learned a lot more:

1. That query produces spectacularly bad estimates, so we finish up
having to increase the number of buckets in a parallel hash join many
times. That is quite interesting, but unrelated to the new code.
2. Parallel hash join is quite slow at negotiating an increase in the
number of hash buckets if all of the input tuples are being filtered
out by quals, because of the choice of where workers check for
PHJ_GROWTH_NEED_MORE_BUCKETS (a schematic sketch of what I mean is at
the end of this message). That could be improved quite easily, I
think. I have put it on my todo list, since that's also my code, but
it's not a new issue, just one that is now highlighted...
3. This bit of read_stream.c exacerbates unfairness in the underlying
scan, so that problems 1 and 2 combine to produce a nasty slowdown,
which goes away if you change it like so:

- BlockNumber blocknums[16];
+ BlockNumber blocknums[1];
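
To make the fairness effect concrete, here is a toy model. It is
emphatically not the real read_stream.c or parallel scan code, just
two pretend workers taking turns reserving block numbers from a shared
counter with different look-ahead batch sizes:

/*
 * Toy model, NOT the real read_stream.c or parallel scan code: two
 * pretend workers take turns reserving block numbers from a shared
 * "next block" counter, each grabbing a batch at a time, which is
 * roughly the effect of buffering blocknums[16] per stream.  The real
 * code doesn't schedule workers in lock-step like this, of course;
 * the point is only how the look-ahead batch size changes how
 * lopsided the split of a small table can get.
 */
#include <stdio.h>

#define NBLOCKS 24              /* pretend the table has 24 blocks */

int
main(void)
{
    const int batches[] = {16, 1};

    for (int b = 0; b < 2; b++)
    {
        int batch = batches[b];
        int next_block = 0;     /* shared allocator, handed out in order */
        int claimed[2] = {0, 0};

        /* Workers alternate, each reserving up to "batch" blocks per turn. */
        for (int w = 0; next_block < NBLOCKS; w = (w + 1) % 2)
            for (int i = 0; i < batch && next_block < NBLOCKS; i++, next_block++)
                claimed[w]++;

        printf("batch=%2d: worker 0 claims %2d blocks, worker 1 claims %2d\n",
               batch, claimed[0], claimed[1]);
    }
    return 0;
}

On this toy model the split is 16/8 with a batch of 16 and 12/12 with
a batch of 1; in the real case that kind of imbalance interacts with
points 1 and 2 to produce the slowdown.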

I will follow up after some more study.
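
To illustrate point 2, here is a schematic, single-process sketch. The
names (growth_requested, passes_quals, help_increase_buckets) are
invented for the illustration and this is not the actual nodeHash.c
logic; it only shows why the location of the flag check matters when
every tuple is filtered out:

/*
 * Schematic sketch: one worker loads the hash join's build side, and
 * another participant has already asked for more buckets by setting a
 * shared "growth requested" flag.  If the only place this worker looks
 * at that flag is the tuple-insertion path, then a worker whose tuples
 * are all thrown away by quals never looks at it, and the requester is
 * left waiting for the whole scan.
 */
#include <stdbool.h>
#include <stdio.h>

#define NTUPLES 1000000

static bool growth_requested = true;    /* pretend someone already asked */

static bool
passes_quals(int tuple)
{
    (void) tuple;
    return false;               /* the unlucky case: every tuple filtered out */
}

static void
help_increase_buckets(const char *variant, int tuple)
{
    printf("%s: joined the bucket resize at tuple %d\n", variant, tuple);
    growth_requested = false;
}

int
main(void)
{
    /* Variant A: only check the flag when we actually insert a tuple. */
    for (int i = 0; i < NTUPLES; i++)
    {
        if (passes_quals(i))
        {
            if (growth_requested)
                help_increase_buckets("check-on-insert", i);
            /* ... insert tuple ... */
        }
    }
    if (growth_requested)
        printf("check-on-insert: never helped; requester waited out the scan\n");

    /* Variant B: also glance at the flag for every input tuple. */
    growth_requested = true;
    for (int i = 0; i < NTUPLES; i++)
    {
        if (growth_requested)
            help_increase_buckets("check-per-tuple", i);
        if (passes_quals(i))
        {
            /* ... insert tuple ... */
        }
    }
    return 0;
}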
