Re: Parallel append plan instability/randomness

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Jim Finnerty <jfinnert(at)amazon(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Parallel append plan instability/randomness
Date: 2018-01-08 16:57:58
Message-ID: 4160.1515430678@sss.pgh.pa.us
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Mon, Jan 8, 2018 at 11:42 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> More generally, I wonder if
>> it wouldn't be better to implement this behavior at runtime rather
>> than plan time.

> Ignoring some details around partial vs. non-partial plans, that's
> pretty much what we ARE doing, but to make it efficient, we sort the
> paths at plan time so that those choices are easy to make at runtime.
> If we didn't do that, we could have every worker sort the paths at
> execution time instead, or have the first process to arrive perform
> the sort and store the results in shared memory while everyone else
> waits, but that seems to be more complicated and less efficient, so I
> don't understand why you're proposing it.

The main bit of info we'd have at runtime that we lack at plan time is
certainty about the number of available workers. Maybe that doesn't
really add anything useful to the order in which subplans would be doled
out; not sure.
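
(For anyone following along, here is a minimal sketch of the scheme
Robert describes, in plain C11 rather than the actual executor code;
the Subplan struct, cost numbers, and relation names are invented for
illustration, and the "workers" are simulated in a single thread. The
point is just that the sort happens once, up front, and runtime
dispatch then reduces to an atomic fetch-add.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdatomic.h>

    typedef struct Subplan
    {
        const char *relname;
        double      total_cost;
    } Subplan;

    /* Descending cost: hand out the most expensive subplans first. */
    static int
    cmp_cost_desc(const void *a, const void *b)
    {
        double ca = ((const Subplan *) a)->total_cost;
        double cb = ((const Subplan *) b)->total_cost;

        if (ca > cb) return -1;
        if (ca < cb) return 1;
        return 0;
    }

    int
    main(void)
    {
        Subplan subplans[] = {
            {"part_2018", 1250.0},
            {"part_2016", 430.0},
            {"part_2017", 980.0},
        };
        int nplans = sizeof(subplans) / sizeof(subplans[0]);

        /* "Plan time": sort once, before any worker runs. */
        qsort(subplans, nplans, sizeof(Subplan), cmp_cost_desc);

        /*
         * "Runtime": each worker claims the next unclaimed subplan.
         * (Simulated here in one thread; real workers would each run
         * this loop concurrently against the counter in shared memory.)
         */
        atomic_int next_plan = 0;
        int claimed;

        while ((claimed = atomic_fetch_add(&next_plan, 1)) < nplans)
            printf("worker claims %s (cost %.0f)\n",
                   subplans[claimed].relname,
                   subplans[claimed].total_cost);

        return 0;
    }

Since the ordering is fixed before execution starts, every worker sees
the same order without any execution-time sort or coordination beyond
the shared counter, which is the efficiency argument for doing the sort
at plan time.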

regards, tom lane
