Re: Question of Parallel Hash Join on TPC-H Benchmark

From: Ba Jinsheng <bajinsheng(at)u(dot)nus(dot)edu>
To: Zhang Mingli <zmlpostgres(at)gmail(dot)com>, "pgsql-bugs(at)lists(dot)postgresql(dot)org" <pgsql-bugs(at)lists(dot)postgresql(dot)org>, Tomas Vondra <tomas(at)vondra(dot)me>
Subject: Re: Question of Parallel Hash Join on TPC-H Benchmark
Date: 2024-10-10 16:52:27
Message-ID: SEZPR06MB6494BD3DDF5B03700032C2C98A782@SEZPR06MB6494.apcprd06.prod.outlook.com
Lists: pgsql-bugs

Thanks for your replies!

I configured these variables and did get a worse query plan:
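(For clarity, the settings suggested in the quoted message below can be applied as follows; the zero values for the two scan-size GUCs are just one possible way of "reducing" them:)

```sql
-- Make parallel plans look as cheap as possible to the planner
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
-- Consider parallel scans even for tiny relations (0 is one possible choice)
SET min_parallel_table_scan_size = 0;
SET min_parallel_index_scan_size = 0;
```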

QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=57899.45..57899.45 rows=1 width=195) (actual time=4261.283..4261.806 rows=100 loops=1)
   ->  Sort  (cost=57899.45..57899.45 rows=1 width=195) (actual time=4261.281..4261.795 rows=100 loops=1)
         Sort Key: supplier.s_acctbal DESC, nation.n_name, supplier.s_name, part.p_partkey
         Sort Method: top-N heapsort  Memory: 71kB
         ->  Hash Join  (cost=29025.99..57899.44 rows=1 width=195) (actual time=281.174..4261.118 rows=485 loops=1)
               Hash Cond: ((part.p_partkey = partsupp.ps_partkey) AND ((SubPlan 1) = partsupp.ps_supplycost))
               ->  Gather  (cost=0.00..5378.00 rows=804 width=30) (actual time=0.607..0.807 rows=826 loops=1)
                     Workers Planned: 2
                     Workers Launched: 2
                     ->  Parallel Seq Scan on part  (cost=0.00..5378.00 rows=335 width=30) (actual time=0.156..17.662 rows=275 loops=3)
                           Filter: (((p_type)::text ~~ '%STEEL'::text) AND (p_size = 30))
                           Rows Removed by Filter: 66391
               ->  Hash  (cost=22718.99..22718.99 rows=160000 width=175) (actual time=108.521..108.833 rows=160240 loops=1)
                     Buckets: 65536  Batches: 8  Memory Usage: 4648kB
                     ->  Gather  (cost=297.99..22718.99 rows=160000 width=175) (actual time=5.148..53.343 rows=160240 loops=1)
                           Workers Planned: 2
                           Workers Launched: 2
                           ->  Parallel Hash Join  (cost=297.99..22718.99 rows=66667 width=175) (actual time=2.076..71.222 rows=53413 loops=3)
                                 Hash Cond: (partsupp.ps_suppkey = supplier.s_suppkey)
                                 ->  Parallel Seq Scan on partsupp  (cost=0.00..20893.33 rows=333333 width=14) (actual time=0.036..28.325 rows=266667 loops=3)
                                 ->  Parallel Hash  (cost=287.58..287.58 rows=833 width=169) (actual time=1.600..1.602 rows=668 loops=3)
                                       Buckets: 2048  Batches: 1  Memory Usage: 464kB
                                       ->  Parallel Hash Join  (cost=2.28..287.58 rows=833 width=169) (actual time=0.047..1.870 rows=1002 loops=2)
                                             Hash Cond: (supplier.s_nationkey = nation.n_nationkey)
                                             ->  Parallel Seq Scan on supplier  (cost=0.00..264.67 rows=4167 width=144) (actual time=0.004..0.518 rows=5000 loops=2)
                                             ->  Parallel Hash  (cost=2.25..2.25 rows=3 width=33) (actual time=0.020..0.023 rows=2 loops=2)
                                                   Buckets: 1024  Batches: 1  Memory Usage: 40kB
                                                   ->  Parallel Hash Join  (cost=1.05..2.25 rows=3 width=33) (actual time=0.028..0.040 rows=5 loops=1)
                                                         Hash Cond: (nation.n_regionkey = region.r_regionkey)
                                                         ->  Parallel Seq Scan on nation  (cost=0.00..1.15 rows=15 width=37) (actual time=0.002..0.004 rows=25 loops=1)
                                                         ->  Parallel Hash  (cost=1.04..1.04 rows=1 width=4) (actual time=0.010..0.010 rows=1 loops=1)
                                                               Buckets: 1024  Batches: 1  Memory Usage: 40kB
                                                               ->  Parallel Seq Scan on region  (cost=0.00..1.04 rows=1 width=4) (actual time=0.005..0.006 rows=1 loops=1)
                                                                     Filter: (r_name = 'ASIA'::bpchar)
                                                                     Rows Removed by Filter: 4
               SubPlan 1
                 ->  Aggregate  (cost=48.67..48.68 rows=1 width=32) (actual time=3.144..3.144 rows=1 loops=1311)
                       ->  Nested Loop  (cost=0.85..48.67 rows=1 width=6) (actual time=1.306..3.143 rows=1 loops=1311)
                             Join Filter: (region_1.r_regionkey = nation_1.n_regionkey)
                             Rows Removed by Join Filter: 3
                             ->  Gather  (cost=0.00..1.04 rows=1 width=4) (actual time=0.647..3.118 rows=1 loops=1311)
                                   Workers Planned: 1
                                   Workers Launched: 1
                                   ->  Parallel Seq Scan on region region_1  (cost=0.00..1.04 rows=1 width=4) (actual time=0.001..0.002 rows=0 loops=2622)
                                         Filter: (r_name = 'ASIA'::bpchar)
                                         Rows Removed by Filter: 2
                             ->  Nested Loop  (cost=0.85..47.58 rows=4 width=10) (actual time=0.010..0.024 rows=4 loops=1311)
                                   ->  Nested Loop  (cost=0.71..46.96 rows=4 width=10) (actual time=0.008..0.017 rows=4 loops=1311)
                                         ->  Index Scan using partsupp_pkey on partsupp partsupp_1  (cost=0.42..13.75 rows=4 width=10) (actual time=0.005..0.006 rows=4 loops=1311)
                                               Index Cond: (ps_partkey = part.p_partkey)
                                         ->  Index Scan using supplier_pkey on supplier supplier_1  (cost=0.29..8.30 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=5244)
                                               Index Cond: (s_suppkey = partsupp_1.ps_suppkey)
                                   ->  Index Scan using nation_pkey on nation nation_1  (cost=0.14..0.16 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=5244)
                                         Index Cond: (n_nationkey = supplier_1.s_nationkey)
 Planning Time: 10.079 ms
 Execution Time: 4262.062 ms
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

However, the Hash Join on the fifth line of the plan is still not parallel, while the other Hash Joins are. This is not what I intended.

I tried another thing. I reset every configuration to its default and applied the following patch:

diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index b4e085e9d4..a0dc032d79 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -803,8 +803,6 @@ max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context)
 			Assert(context->max_hazard != PROPARALLEL_UNSAFE);
 			context->max_hazard = proparallel;
 			/* done if we are not expecting any unsafe functions */
-			if (context->max_interesting == proparallel)
-				return true;
 			break;
 		case PROPARALLEL_UNSAFE:
 			context->max_hazard = proparallel;
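(For context, the proparallel value tested in this function is the per-function parallel-safety marking stored in pg_proc: 's' = safe, 'r' = restricted, 'u' = unsafe. One way to inspect how a given function is marked is a catalog query like the following; 'min' is just an example function name:)

```sql
-- Show the parallel-safety marking(s) of a function, e.g. min
SELECT proname, proparallel
FROM pg_proc
WHERE proname = 'min';
```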

Then I can get a more efficient query plan. Compared to the original execution under the default configuration, the estimated cost is reduced by around 37% and the execution time by around 50%.

QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=42019.25..42019.25 rows=1 width=195) (actual time=134.698..142.258 rows=100 loops=1)
   ->  Sort  (cost=42019.25..42019.25 rows=1 width=195) (actual time=134.697..142.252 rows=100 loops=1)
         Sort Key: supplier.s_acctbal DESC, nation.n_name, supplier.s_name, part.p_partkey
         Sort Method: top-N heapsort  Memory: 69kB
         ->  Gather  (cost=26845.97..42019.24 rows=1 width=195) (actual time=116.843..142.014 rows=485 loops=1)
               Workers Planned: 2
               Workers Launched: 2
               ->  Parallel Hash Join  (cost=25845.97..41019.14 rows=1 width=195) (actual time=120.427..130.569 rows=162 loops=3)
                     Hash Cond: ((part.p_partkey = partsupp.ps_partkey) AND ((SubPlan 1) = partsupp.ps_supplycost))
                     ->  Parallel Seq Scan on part  (cost=0.00..5378.00 rows=335 width=30) (actual time=0.102..9.739 rows=275 loops=3)
                           Filter: (((p_type)::text ~~ '%STEEL'::text) AND (p_size = 30))
                           Rows Removed by Filter: 66391
                     ->  Parallel Hash  (cost=23217.97..23217.97 rows=66667 width=175) (actual time=97.833..97.837 rows=53413 loops=3)
                           Buckets: 65536  Batches: 8  Memory Usage: 4800kB
                           ->  Hash Join  (cost=407.96..23217.97 rows=66667 width=175) (actual time=4.596..69.280 rows=53413 loops=3)
                                 Hash Cond: (partsupp.ps_suppkey = supplier.s_suppkey)
                                 ->  Parallel Seq Scan on partsupp  (cost=0.00..20893.33 rows=333333 width=14) (actual time=0.039..26.690 rows=266667 loops=3)
                                 ->  Hash  (cost=382.96..382.96 rows=2000 width=169) (actual time=4.543..4.545 rows=2003 loops=3)
                                       Buckets: 2048  Batches: 1  Memory Usage: 413kB
                                       ->  Hash Join  (cost=2.46..382.96 rows=2000 width=169) (actual time=0.106..3.658 rows=2003 loops=3)
                                             Hash Cond: (supplier.s_nationkey = nation.n_nationkey)
                                             ->  Seq Scan on supplier  (cost=0.00..323.00 rows=10000 width=144) (actual time=0.015..1.071 rows=10000 loops=3)
                                             ->  Hash  (cost=2.40..2.40 rows=5 width=33) (actual time=0.070..0.072 rows=5 loops=3)
                                                   Buckets: 1024  Batches: 1  Memory Usage: 9kB
                                                   ->  Hash Join  (cost=1.07..2.40 rows=5 width=33) (actual time=0.061..0.067 rows=5 loops=3)
                                                         Hash Cond: (nation.n_regionkey = region.r_regionkey)
                                                         ->  Seq Scan on nation  (cost=0.00..1.25 rows=25 width=37) (actual time=0.009..0.011 rows=25 loops=3)
                                                         ->  Hash  (cost=1.06..1.06 rows=1 width=4) (actual time=0.031..0.031 rows=1 loops=3)
                                                               Buckets: 1024  Batches: 1  Memory Usage: 9kB
                                                               ->  Seq Scan on region  (cost=0.00..1.06 rows=1 width=4) (actual time=0.014..0.016 rows=1 loops=3)
                                                                     Filter: (r_name = 'ASIA'::bpchar)
                                                                     Rows Removed by Filter: 4
                     SubPlan 1
                       ->  Aggregate  (cost=48.70..48.71 rows=1 width=32) (actual time=0.017..0.017 rows=1 loops=3810)
                             ->  Nested Loop  (cost=0.85..48.70 rows=1 width=6) (actual time=0.012..0.016 rows=1 loops=3810)
                                   Join Filter: (region_1.r_regionkey = nation_1.n_regionkey)
                                   Rows Removed by Join Filter: 3
                                   ->  Seq Scan on region region_1  (cost=0.00..1.06 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=3810)
                                         Filter: (r_name = 'ASIA'::bpchar)
                                         Rows Removed by Filter: 4
                                   ->  Nested Loop  (cost=0.85..47.58 rows=4 width=10) (actual time=0.006..0.015 rows=4 loops=3810)
                                         ->  Nested Loop  (cost=0.71..46.96 rows=4 width=10) (actual time=0.005..0.011 rows=4 loops=3810)
                                               ->  Index Scan using partsupp_pkey on partsupp partsupp_1  (cost=0.42..13.75 rows=4 width=10) (actual time=0.003..0.004 rows=4 loops=3810)
                                                     Index Cond: (ps_partkey = part.p_partkey)
                                               ->  Index Scan using supplier_pkey on supplier supplier_1  (cost=0.29..8.30 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=15240)
                                                     Index Cond: (s_suppkey = partsupp_1.ps_suppkey)
                                         ->  Index Scan using nation_pkey on nation nation_1  (cost=0.14..0.16 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=15240)
                                               Index Cond: (n_nationkey = supplier_1.s_nationkey)
 Planning Time: 2.046 ms
 Execution Time: 142.437 ms
(50 rows)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Best regards,

Jinsheng Ba

________________________________
From: Zhang Mingli <zmlpostgres(at)gmail(dot)com>
Sent: Thursday, October 10, 2024 4:47 PM
To: Ba Jinsheng <bajinsheng(at)u(dot)nus(dot)edu>; pgsql-bugs(at)lists(dot)postgresql(dot)org <pgsql-bugs(at)lists(dot)postgresql(dot)org>; Tomas Vondra <tomas(at)vondra(dot)me>
Subject: Re: Question of Parallel Hash Join on TPC-H Benchmark


Hi,

Zhang Mingli
www.hashdata.xyz
On Oct 10, 2024 at 22:41 +0800, Tomas Vondra <tomas(at)vondra(dot)me>, wrote:

You can try setting parallel_setup_cost and parallel_tuple_cost to 0,
and reducing min_parallel_{table,index}_scan_size. That should force a
parallel plan, and you'll see if parallelism is helpful.
In addition to this, it seems you're after a parallel-aware Hash Join.
Check whether enable_parallel_hash = on; an EXPLAIN (VERBOSE) will show the related GUCs.
Notice: This email is generated from the account of an NUS alumnus. Contents, views, and opinions therein are solely those of the sender.
