From: Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Neha Sharma <neha(dot)sharma(at)enterprisedb(dot)com>
Subject: Re: Gather Merge
Date: 2017-02-16 13:43:32
Message-ID: CAGPqQf2164iV6k-_M75qEZWiCfRarA_SKSmHjc0Uh1rEf5RJrA@mail.gmail.com
Lists: pgsql-hackers
Here are the latest TPC-H runs, on top of commit
93e6e40574bccf9c6f33c520a4189d3e98e2fd1f
(which includes the parallel index scan commit).
Settings:
work_mem = 64MB
max_parallel_workers_per_gather = 4
TPC-H scale factor = 20
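For reference, the two server settings would be applied per session like this (a sketch; whether they were set per session or in postgresql.conf is not shown here, and the scale factor describes the generated dataset rather than a GUC):

SET work_mem = '64MB';
SET max_parallel_workers_per_gather = 4;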
Queries picking the gather merge path (timings in ms; as in the earlier runs, the first number is without the patch and the second is with GM):
Query 2: 17678.570 - 16766.051
Query 3: 44357.977 - 44001.607
Query 4: 7763.992 - 7100.267
Query 5: 21828.874 - 21437.217
Query 12: 19067.318 - 20218.332
Query 17: 113895.084 - 104935.094
Query 18: 230650.193 - 191607.031
(The query output files are attached.)
With higher work_mem, the TPC-H queries choose a hash aggregate plan. For
some of the queries, if I force gather merge with the higher work_mem
setting, the results with GM are much better (example: query 9). It seems
something is wrong with the sort or hash aggregate costing, because of which
the planner is unable to pick GM in some cases (that needs more
investigation, separate from this thread); see the sketch below.
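For completeness, here is a minimal sketch of one way to force the planner toward GM for such a test; enable_hashagg is a stock planner GUC, and my exact forcing method is not shown above:

-- Take the hash aggregate plan off the table so the sort-based paths,
-- including Gather Merge, get reconsidered (assumes the patch's
-- enable_gathermerge GUC is left on, its default).
SET work_mem = '128MB';
SET enable_hashagg = off;
-- Then re-run EXPLAIN ANALYZE on the affected query (e.g. Q9) and compare.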
Here are some other queries that perform about 2x faster with gather merge,
even with the higher work_mem settings.
Example:
postgres=# show work_mem ;
work_mem
----------
128MB
(1 row)
postgres=# show max_parallel_workers_per_gather ;
max_parallel_workers_per_gather
---------------------------------
4
(1 row)
postgres=# explain analyze select * from customer, orders where o_custkey = c_custkey order by c_name;
                                                                  QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 Gather Merge  (cost=391019.23..929812.42 rows=4499894 width=274) (actual time=21958.057..33453.440 rows=4500000 loops=1)
   Workers Planned: 4
   Workers Launched: 4
   ->  Sort  (cost=390019.17..392831.61 rows=1124974 width=274) (actual time=21023.906..22476.398 rows=900000 loops=5)
         Sort Key: customer.c_name
         Sort Method: external merge  Disk: 270000kB
         ->  Hash Join  (cost=21245.00..130833.13 rows=1124974 width=274) (actual time=442.298..3300.924 rows=900000 loops=5)
               Hash Cond: (orders.o_custkey = customer.c_custkey)
               ->  Parallel Seq Scan on orders  (cost=0.00..94119.74 rows=1124974 width=111) (actual time=0.066..1026.268 rows=900000 loops=5)
               ->  Hash  (cost=15620.00..15620.00 rows=450000 width=163) (actual time=436.946..436.946 rows=450000 loops=5)
                     Buckets: 524288  Batches: 1  Memory Usage: 91930kB
                     ->  Seq Scan on customer  (cost=0.00..15620.00 rows=450000 width=163) (actual time=0.041..95.679 rows=450000 loops=5)
 Planning time: 1.698 ms
 Execution time: 33866.866 ms
postgres=# set enable_gathermerge = off;
SET
postgres=# explain analyze select * from customer, orders where o_custkey = c_custkey order by c_name;
                                                             QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=1292720.11..1303969.84 rows=4499894 width=274) (actual time=62937.054..70417.760 rows=4500000 loops=1)
   Sort Key: customer.c_name
   Sort Method: external merge  Disk: 1298616kB
   ->  Hash Join  (cost=21245.00..210987.48 rows=4499894 width=274) (actual time=390.660..7373.668 rows=4500000 loops=1)
         Hash Cond: (orders.o_custkey = customer.c_custkey)
         ->  Seq Scan on orders  (cost=0.00..127868.94 rows=4499894 width=111) (actual time=0.120..1386.200 rows=4500000 loops=1)
         ->  Hash  (cost=15620.00..15620.00 rows=450000 width=163) (actual time=389.610..389.610 rows=450000 loops=1)
               Buckets: 524288  Batches: 1  Memory Usage: 91930kB
               ->  Seq Scan on customer  (cost=0.00..15620.00 rows=450000 width=163) (actual time=0.016..85.376 rows=450000 loops=1)
 Planning time: 1.155 ms
 Execution time: 70869.090 ms
(11 rows)
-- Force a parallel seq scan (under a plain Gather) by lowering
-- parallel_tuple_cost from its default of 0.1.
postgres=# set parallel_tuple_cost = 0.01;
SET
postgres=# explain analyze select * from customer, orders where o_custkey = c_custkey order by c_name;
                                                                 QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=1258564.69..1269814.43 rows=4499894 width=274) (actual time=59070.986..66452.565 rows=4500000 loops=1)
   Sort Key: customer.c_name
   Sort Method: external merge  Disk: 1298600kB
   ->  Gather  (cost=22245.00..176832.07 rows=4499894 width=274) (actual time=353.397..3914.851 rows=4500000 loops=1)
         Workers Planned: 4
         Workers Launched: 4
         ->  Hash Join  (cost=21245.00..130833.13 rows=1124974 width=274) (actual time=358.574..2004.654 rows=900000 loops=5)
               Hash Cond: (orders.o_custkey = customer.c_custkey)
               ->  Parallel Seq Scan on orders  (cost=0.00..94119.74 rows=1124974 width=111) (actual time=0.096..293.176 rows=900000 loops=5)
               ->  Hash  (cost=15620.00..15620.00 rows=450000 width=163) (actual time=356.567..356.567 rows=450000 loops=5)
                     Buckets: 524288  Batches: 1  Memory Usage: 91930kB
                     ->  Seq Scan on customer  (cost=0.00..15620.00 rows=450000 width=163) (actual time=0.038..88.918 rows=450000 loops=5)
 Planning time: 0.768 ms
 Execution time: 66871.398 ms
(14 rows)
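The next example assumes the settings changed above were first put back to their defaults, roughly like this (a step the original transcript does not show):

postgres=# RESET enable_gathermerge;
RESET
postgres=# RESET parallel_tuple_cost;
RESET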
Another query:
postgres=# explain analyze select * from pgbench_accounts where filler like '%foo%' order by aid;
                                                                   QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Merge  (cost=47108.00..70432.79 rows=194804 width=97) (actual time=267.708..397.309 rows=200000 loops=1)
   Workers Planned: 4
   Workers Launched: 4
   ->  Sort  (cost=46107.94..46229.69 rows=48701 width=97) (actual time=260.969..268.848 rows=40000 loops=5)
         Sort Key: aid
         Sort Method: quicksort  Memory: 6861kB
         ->  Parallel Seq Scan on pgbench_accounts  (cost=0.00..42316.16 rows=48701 width=97) (actual time=210.499..225.161 rows=40000 loops=5)
               Filter: (filler ~~ '%foo%'::text)
               Rows Removed by Filter: 360000
 Planning time: 0.120 ms
 Execution time: 412.632 ms
(11 rows)
postgres=# set enable_gathermerge = off;
SET
postgres=# explain analyze select * from pgbench_accounts where filler like '%foo%' order by aid;
                                                            QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=78181.90..78668.91 rows=194805 width=97) (actual time=905.688..929.926 rows=200000 loops=1)
   Sort Key: aid
   Sort Method: quicksort  Memory: 35832kB
   ->  Seq Scan on pgbench_accounts  (cost=0.00..61066.65 rows=194805 width=97) (actual time=772.789..835.104 rows=200000 loops=1)
         Filter: (filler ~~ '%foo%'::text)
         Rows Removed by Filter: 1800000
 Planning time: 0.151 ms
 Execution time: 943.824 ms
(8 rows)
I think that with some of the other parallel operator patches, like parallel
bitmap scan and parallel hash join, GM will get picked more often in the
TPC-H queries.
Regards,
On Mon, Feb 6, 2017 at 2:41 PM, Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com> wrote:
>
> Thanks, Neha, for the test LCOV report.
>
> I ran TPC-H at scale factor 10 with the latest patch, against the latest
> code as of 1st Feb (f1169ab501ce90e035a7c6489013a1d4c250ac92).
>
> - max_worker_processes = DEFAULT (8)
> - max_parallel_workers_per_gather = 4
> - A cold cache environment was ensured: before every query execution, the
> server was stopped and the OS caches were dropped.
> - power2 machine with 512GB of RAM
>
> Here are the results. I did three runs and took the median; the first
> timing is without the patch and the second is with GM.
>
> Query 3: 45035.425 - 43935.497
> Query 4: 7098.259 - 6651.498
> Query 5: 37114.338 - 37605.579
> Query 9: 87544.144 - 44617.138
> Query 10: 43810.497 - 37133.404
> Query 12: 20309.993 - 19639.213
> Query 15: 61837.415 - 60240.762
> Query 17: 134121.961 - 116943.542
> Query 18: 248157.735 - 193463.311
> Query 20: 203448.405 - 166733.112
>
> Also attaching the output of the TPC-H runs.
>
>
>
> On Fri, Feb 3, 2017 at 5:56 PM, Neha Sharma <neha(dot)sharma(at)enterprisedb(dot)com> wrote:
>
>> Hi,
>>
>> I have done some testing with the latest patch:
>>
>> 1) ./pgbench postgres -i -F 100 -s 20
>> 2) update pgbench_accounts set filler = 'foo' where aid%10 = 0;
>> 3) vacuum analyze pgbench_accounts;
>> 4) set max_parallel_workers_per_gather = 4;
>> 5) set max_parallel_workers = 4;
>>
>> Machine Configuration:
>> RAM: 16 GB
>> VCPU: 8
>> Disk: 640 GB
>>
>> Test case script with output file attached.
>>
>> LCOV Report:
>>
>> File Name                                  Line Cov.   Line Cov.   Func Cov.   Func Cov.
>>                                            w/o tests   w/ tests    w/o tests   w/ tests
>> src/backend/executor/nodeGatherMerge.c       0.0 %      92.3 %       0.0 %      92.3 %
>> src/backend/commands/explain.c              65.5 %      68.4 %      81.7 %      85.0 %
>> src/backend/executor/execProcnode.c         92.5 %      95.1 %     100.0 %     100.0 %
>> src/backend/nodes/copyfuncs.c               77.2 %      77.6 %      73.0 %      73.4 %
>> src/backend/nodes/outfuncs.c                32.5 %      35.9 %      31.9 %      36.2 %
>> src/backend/nodes/readfuncs.c               62.7 %      68.2 %      53.3 %      61.7 %
>> src/backend/optimizer/path/allpaths.c       93.0 %      93.4 %     100.0 %     100.0 %
>> src/backend/optimizer/path/costsize.c       96.7 %      96.8 %     100.0 %     100.0 %
>> src/backend/optimizer/plan/createplan.c     89.9 %      91.2 %      95.0 %      96.0 %
>> src/backend/optimizer/plan/planner.c        95.1 %      95.2 %      97.3 %      97.3 %
>> src/backend/optimizer/plan/setrefs.c        94.7 %      94.7 %      97.1 %      97.1 %
>> src/backend/optimizer/plan/subselect.c      94.1 %      94.1 %     100.0 %     100.0 %
>> src/backend/optimizer/util/pathnode.c       95.6 %      96.1 %     100.0 %     100.0 %
>> src/backend/utils/misc/guc.c                67.4 %      67.4 %      91.9 %      91.9 %
>>
>> On Wed, Feb 1, 2017 at 7:02 PM, Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com> wrote:
>>
>>> Due to the recent commit below, the patch no longer applies cleanly on
>>> the master branch.
>>>
>>> commit d002f16c6ec38f76d1ee97367ba6af3000d441d0
>>> Author: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
>>> Date: Mon Jan 30 17:15:42 2017 -0500
>>>
>>> Add a regression test script dedicated to exercising system views.
>>>
>>> Please find attached the latest patch.
>>>
>>>
>>>
>>> On Wed, Feb 1, 2017 at 5:55 PM, Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com> wrote:
>>>
>>>> I am sorry for the delay; here is the latest rebased patch.
>>>>
>>>> My colleague Neha Sharma reported one regression with the patch: the
>>>> explain output for the Sort node under Gather Merge was always showing
>>>> the cost as zero:
>>>>
>>>> explain analyze select '' AS "xxx" from pgbench_accounts where filler like '%foo%' order by aid;
>>>>
>>>>                                                              QUERY PLAN
>>>> --------------------------------------------------------------------------------------------------------------------------------------------
>>>>  Gather Merge  (cost=47169.81..70839.91 rows=197688 width=36) (actual time=406.297..653.572 rows=200000 loops=1)
>>>>    Workers Planned: 4
>>>>    Workers Launched: 4
>>>>    ->  Sort  (*cost=0.00..0.00 rows=0 width=0*) (actual time=368.945..391.124 rows=40000 loops=5)
>>>>          Sort Key: aid
>>>>          Sort Method: quicksort  Memory: 3423kB
>>>>          ->  Parallel Seq Scan on pgbench_accounts  (cost=0.00..42316.60 rows=49422 width=36) (actual time=296.612..338.873 rows=40000 loops=5)
>>>>                Filter: (filler ~~ '%foo%'::text)
>>>>                Rows Removed by Filter: 360000
>>>>  Planning time: 0.184 ms
>>>>  Execution time: 734.963 ms
>>>>
>>>> This patch also fixes that issue.
>>>>
>>>>
>>>>
>>>>
>>>> On Wed, Feb 1, 2017 at 11:27 AM, Michael Paquier <michael(dot)paquier(at)gmail(dot)com> wrote:
>>>>
>>>>> On Mon, Jan 23, 2017 at 6:51 PM, Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com> wrote:
>>>>> > On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com> wrote:
>>>>> >>
>>>>> > The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.
>>>>>
>>>>> Is an update going to be provided? I have moved this patch to next CF
>>>>> with "waiting on author" as status.
>>>>> --
>>>>> Michael
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Rushabh Lathia
>>>>
>>>
>>>
>>>
>>> --
>>> Rushabh Lathia
>>>
>>>
>>
>>
>> --
>>
>> Regards,
>>
>> Neha Sharma
>>
>
>
>
> --
> Rushabh Lathia
>
--
Rushabh Lathia
Attachments:
without_gm.tar.gz   application/x-gzip   133.7 KB
with_gm.tar.gz      application/x-gzip   133.7 KB