From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Observations in Parallel Append
Date: 2017-12-27 06:39:39
Message-ID: CA+TgmobUYcS4+_QPwsyUseDhBebqtmU_oL=rk3MKgBK-w52U9w@mail.gmail.com
Lists: pgsql-hackers
On Sun, Dec 24, 2017 at 8:37 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> On Sun, Dec 24, 2017 at 12:06 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> On Fri, Dec 22, 2017 at 6:18 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>>
>>> Also, don't we need to use the parallel_divisor for partial paths
>>> instead of non-partial paths, since those are the ones actually
>>> distributed among workers?
>>
>> Uh, that seems backwards to me. We're trying to estimate the average
>> number of rows per worker.
>
> Okay, but is it appropriate to use the parallel_divisor? The
> parallel_divisor represents the contribution of all the workers
> (plus the leader's contribution), whereas for a non-partial path
> only a subset of the workers will ever operate on it. Consider a
> case with one non-partial subpath and five partial subpaths, with
> six as the parallel_divisor: the current code will divide the rows
> of the non-partial subpath across six workers, but in reality
> exactly one worker will execute that path.
That's true, of course, but if five processes each return 0 rows and
the sixth process returns 600 rows, the average number of rows per
process is 100, not anything else.
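Spelled out as a trivial, self-contained sketch (the numbers are just
the hypothetical ones from the example above):

#include <stdio.h>

int
main(void)
{
    /* five processes return no rows; the sixth returns all 600 */
    double  rows[] = {0, 0, 0, 0, 0, 600};
    double  total = 0.0;

    for (int i = 0; i < 6; i++)
        total += rows[i];

    /* average rows per process: 600 / 6 = 100 */
    printf("%g\n", total / 6.0);
    return 0;
}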
Here's one way to look at it. Suppose there is a table with 1000
partitions. If we do a Parallel Append over a Parallel Seq Scan per
partition, we will come up with a row estimate by summing the
estimated row count across all partitions and dividing by the
parallel_divisor. This will give us some answer. If we instead do a
Parallel Append over a Seq Scan per partition, we should really come
up with the *same* estimate. The only way to do that is to also
divide by the parallel_divisor in this case.
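In code form, here's a rough sketch of the estimate I'm describing.
This is a hypothetical function, not the actual costsize.c code, and
it assumes each subpath->rows holds the subpath's total, undivided
row estimate:

#include "postgres.h"
#include "nodes/relation.h"     /* Path */
#include "nodes/pg_list.h"      /* List, ListCell, foreach */

/*
 * Sum the total row estimate of every subpath, partial or not, and
 * divide once by the parallel divisor.  That way a Parallel Append
 * over Parallel Seq Scans and a Parallel Append over plain Seq Scans
 * arrive at the same estimate.
 */
static double
parallel_append_rows(List *subpaths, double parallel_divisor)
{
    double      total_rows = 0.0;
    ListCell   *lc;

    foreach(lc, subpaths)
    {
        Path       *subpath = (Path *) lfirst(lc);

        total_rows += subpath->rows;
    }

    return total_rows / parallel_divisor;
}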
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company