From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Dimitrios Apostolou <jimis(at)gmx(dot)net>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: SELECT DISTINCT chooses parallel seqscan instead of indexscan on huge table with 1000 partitions
Date: 2024-05-10 20:22:48
Message-ID: 1629463.1715372568@sss.pgh.pa.us
Lists: pgsql-general
Dimitrios Apostolou <jimis(at)gmx(dot)net> writes:
> Further digging into this simple query, if I force the non-parallel plan
> by setting max_parallel_workers_per_gather TO 0, I see that the query
> planner comes up with a cost much higher:
> Limit  (cost=363.84..1134528847.47 rows=10 width=4)
>   ->  Unique  (cost=363.84..22690570036.41 rows=200 width=4)
>         ->  Append  (cost=363.84..22527480551.58 rows=65235793929 width=4)
> ...
> The total cost on the 1st line (cost=363.84..1134528847.47) has a much
> higher upper limit than the total cost when
> max_parallel_workers_per_gather is 4 (cost=853891608.79..853891608.99).
> This explains the planner's choice. But I wonder why the cost estimation
> is so far away from reality.
I'd say the blame lies with that (probably-default) estimate of
just 200 distinct rows. That means the planner expects to have
to read about 5% (10/200) of the tables to get the result, and
that's making fast-start plans look bad.
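[For illustration, the distinct-rows estimate the planner is working from can be inspected in pg_stats; a sketch with the same hypothetical names as above:]

    -- n_distinct is a positive absolute count, or a negative fraction of rows;
    -- if no row comes back, the column has never been analyzed and the planner
    -- falls back to its default guess of 200 distinct values
    SELECT tablename, attname, n_distinct
    FROM pg_stats
    WHERE tablename = 'events' AND attname = 'device_id';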
Possibly an explicit ANALYZE on the partitioned table would help.
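[A sketch of that, again with hypothetical names; autovacuum analyzes only the leaf partitions, so statistics for the partitioned parent itself have to be collected by hand. If the true number of distinct values is known, the estimate can also be pinned:]

    -- collect statistics for the partitioned parent
    ANALYZE events;

    -- optionally override the distinct estimate used for the parent, then
    -- re-ANALYZE; 50000 here is a made-up figure for illustration
    ALTER TABLE events ALTER COLUMN device_id
      SET (n_distinct_inherited = 50000);
    ANALYZE events;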
regards, tom lane