Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>
Cc: vignesh C <vignesh21(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, a(dot)kondratov(at)postgrespro(dot)ru, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date: 2019-11-04 10:02:33
Message-ID: CAFiTN-uvs1jO4QTvnR-UDxZHuE47-tSLc-BRzayjzyA0Zyz0pg@mail.gmail.com
Lists: pgsql-hackers

On Mon, Nov 4, 2019 at 2:43 PM Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com> wrote:
>
> Hello hackers,
>
> I've done some performance testing of this feature. Following is my
> test case (taken from an earlier thread):
>
> postgres=# CREATE TABLE large_test (num1 bigint, num2 double
> precision, num3 double precision);
> postgres=# \timing on
> postgres=# EXPLAIN (ANALYZE, BUFFERS) INSERT INTO large_test (num1,
> num2, num3) SELECT round(random()*10), random(), random()*142 FROM
> generate_series(1, 1000000) s(i);
>
> I've kept the publisher and the subscriber on two different systems.
>
> HEAD:
> With 1000000 tuples,
> Execution Time: 2576.821 ms, Time: 9632.158 ms (00:09.632), Spill count: 245
> With 10000000 tuples (10 times more),
> Execution Time: 30359.509 ms, Time: 95261.024 ms (01:35.261), Spill count: 2442
>
> With the memory accounting patch, following are the performance results:
> With 1000000 tuples,
> logical_decoding_work_mem=64kB, Execution Time: 2414.371 ms, Time:
> 9648.223 ms (00:09.648), Spill count: 2315
> logical_decoding_work_mem=64MB, Execution Time: 2477.830 ms, Time:
> 9895.161 ms (00:09.895), Spill count: 3
> With 10000000 tuples (10 times more),
> logical_decoding_work_mem=64kB, Execution Time: 38259.227 ms, Time:
> 105761.978 ms (01:45.762), Spill count: 23149
> logical_decoding_work_mem=64MB, Execution Time: 24624.639 ms, Time:
> 89985.342 ms (01:29.985), Spill count: 23
>
> With logical decoding of in-progress transactions patch and with
> streaming on, following are the performance results:
> With 1000000 tuples,
> logical_decoding_work_mem=64kB, Execution Time: 2674.034 ms, Time:
> 20779.601 ms (00:20.780)
> logical_decoding_work_mem=64MB, Execution Time: 2062.404 ms, Time:
> 9559.953 ms (00:09.560)
> With 10000000 tuples (10 times more),
> logical_decoding_work_mem=64kB, Execution Time: 26949.588 ms, Time:
> 196261.892 ms (03:16.262)
> logical_decoding_work_mem=64MB, Execution Time: 27084.403 ms, Time:
> 90079.286 ms (01:30.079)

So your results show that with "streaming on", performance degrades?
By any chance, did you try to see where the bottleneck is?
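
In case it helps with reproducing this, a minimal sketch of the pub/sub
setup I assume for such a test (the publication/subscription names and
the connection string are made up; the table definition is the one from
the test case above):

-- On the publisher:
CREATE TABLE large_test (num1 bigint, num2 double precision, num3 double precision);
CREATE PUBLICATION pub_large_test FOR TABLE large_test;

-- On the subscriber (same table definition); with the streaming patch
-- applied, the subscription option enables streaming of in-progress
-- transactions:
CREATE TABLE large_test (num1 bigint, num2 double precision, num3 double precision);
CREATE SUBSCRIPTION sub_large_test
    CONNECTION 'host=<publisher> dbname=postgres'
    PUBLICATION pub_large_test
    WITH (streaming = on);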

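And the knob I assume is flipped on the publisher between the 64kB and
64MB runs (a sketch; it could equally be set in postgresql.conf):

ALTER SYSTEM SET logical_decoding_work_mem = '64kB';  -- or '64MB'
SELECT pg_reload_conf();
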
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
