From: Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Why is Postgres only using 8 cores for partitioned count? [Parallel Append]
Date: 2021-02-14 21:16:04
Message-ID: 82bead5f-79b0-432e-f64e-0b3fd3cc51f1@archidevsys.co.nz
Lists: pgsql-general
On 14/02/2021 22:47, David Rowley wrote:
> On Sun, 14 Feb 2021 at 13:15, Seamus Abshere
> <sabshere(at)alumni(dot)princeton(dot)edu> wrote:
>> The comment from Robert says: (src/backend/optimizer/path/allpaths.c)
>>
>> /*
>> * If the use of parallel append is permitted, always request at least
>> * log2(# of children) workers.
>>
>> In my case, every partition takes 1 second to scan, I have 64 cores, I have 64 partitions, and the wall time is 8 seconds with 8 workers.
>>
>> I assume that if it planned significantly more workers (16? 32? even 64?), it would get significantly faster (even accounting for transaction cost). So why doesn't it ask for more? Note that I've set max_parallel_workers=512, etc. (postgresql.conf in my first message).
> There's perhaps an argument for allowing ALTER TABLE <partitioned
> table> SET (parallel_workers=N); to be set on partitioned tables, but
> we don't currently allow it.
[...]
> David
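For what it's worth, the idea in that quoted comment boils down to something like the sketch below. This is a simplified illustration only, not the actual allpaths.c code; the function and parameter names are made up:

#include <math.h>

/*
 * Simplified sketch (NOT the real allpaths.c logic): a Parallel Append
 * asks for at least log2(# of children) workers, and the result is still
 * subject to the usual max_parallel_workers_per_gather cap.
 */
static int
append_workers_sketch(int n_children, int per_child_workers, int max_per_gather)
{
    int     workers = per_child_workers;
    int     log2_floor = (int) ceil(log2((double) n_children));

    if (workers < log2_floor)
        workers = log2_floor;       /* "at least log2(# of children)" */

    if (workers > max_per_gather)
        workers = max_per_gather;   /* existing per-Gather limit */

    return workers;
}

For 64 children that floor is only ceil(log2(64)) = 6, so the floor on its own never gets anywhere near 64 workers; whatever else raises the estimate (per-child worker counts, the GUC caps) ends up deciding.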
Just wondering why there is a hard-coded limit.
While I agree it might be good to be able to specify the number of workers,
surely it would be possible to derive a suitable default based on the
number of effective processors available?
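For instance, something along these lines might do (purely illustrative; nothing like this exists in PostgreSQL today, and the names are made up):

static int
derived_default_workers(int n_partitions, int effective_cpus, int max_per_gather)
{
    int     workers = n_partitions;     /* ideally one worker per partition */

    if (workers > effective_cpus)
        workers = effective_cpus;       /* no point exceeding available cores */

    if (workers > max_per_gather)
        workers = max_per_gather;       /* still honour the existing GUC cap */

    return workers;
}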
Cheers,
Gavin