From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: vignesh C <vignesh21(at)gmail(dot)com>
Cc: "Hayato Kuroda (Fujitsu)" <kuroda(dot)hayato(at)fujitsu(dot)com>, Shubham Khanna <khannashubham1197(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Improve eviction algorithm in ReorderBuffer
Date: 2024-02-26 07:02:26
Message-ID: CAD21AoCtfDGW9yf3Aj=8c=Ey-5NXvCd48df8WhAO=zu466DPEQ@mail.gmail.com
Lists: pgsql-hackers
On Fri, Feb 23, 2024 at 6:24 PM vignesh C <vignesh21(at)gmail(dot)com> wrote:
>
> On Fri, 9 Feb 2024 at 20:51, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
> >
> >
> > I think this performance regression is not acceptable. In this
> > workload, one transaction has 10k subtransactions and the logical
> > decoding becomes quite slow if logical_decoding_work_mem is not big
> > enough. Therefore, it's a legitimate and common approach to increase
> > logical_decoding_work_mem to speed up the decoding. However, with this
> > patch, the decoding becomes slower than today. It's a bad idea in
> > general to optimize an extreme case while sacrificing the normal (or
> > more common) cases.
> >
>
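(For readers following along: the eviction policy under discussion picks the largest transaction for spilling once decoding exceeds logical_decoding_work_mem. A minimal Python sketch of that idea, with hypothetical names, not the actual ReorderBuffer code:)

```python
import heapq

# Hypothetical sketch: evict the largest transactions first once the
# total decoded size exceeds the memory budget (logical_decoding_work_mem).
def evict_largest(txn_sizes, work_mem):
    # Build a max-heap by pushing negated sizes.
    heap = [(-size, xid) for xid, size in txn_sizes.items()]
    heapq.heapify(heap)
    total = sum(txn_sizes.values())
    evicted = []
    while total > work_mem and heap:
        neg_size, xid = heapq.heappop(heap)  # largest remaining txn
        total += neg_size                    # neg_size is negative
        evicted.append(xid)
    return evicted

# Example: three transactions totalling 150 bytes, budget of 100 bytes;
# evicting the 80-byte transaction alone brings us under budget.
print(evict_largest({"t1": 80, "t2": 50, "t3": 20}, 100))  # → ['t1']
```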
> Since this same function is also used by pg_dump's TopoSort sorting, we
> should verify once that there is no performance impact with a large
> number of objects during dump sorting:
Okay. I've run the pg_dump regression tests with the --timer flag (note
that pg_dump doesn't use the indexed binary heap):
master:
[16:00:25] t/001_basic.pl ................ ok   151 ms ( 0.00 usr 0.00 sys + 0.09 cusr 0.06 csys = 0.15 CPU)
[16:00:25] t/002_pg_dump.pl .............. ok 10157 ms ( 0.23 usr 0.01 sys + 1.48 cusr 0.37 csys = 2.09 CPU)
[16:00:36] t/003_pg_dump_with_server.pl .. ok   504 ms ( 0.00 usr 0.01 sys + 0.10 cusr 0.07 csys = 0.18 CPU)
[16:00:36] t/004_pg_dump_parallel.pl ..... ok  1044 ms ( 0.00 usr 0.00 sys + 0.12 cusr 0.08 csys = 0.20 CPU)
[16:00:37] t/005_pg_dump_filterfile.pl ... ok  2390 ms ( 0.00 usr 0.00 sys + 0.34 cusr 0.19 csys = 0.53 CPU)
[16:00:40] t/010_dump_connstr.pl ......... ok  4813 ms ( 0.01 usr 0.00 sys + 2.13 cusr 0.45 csys = 2.59 CPU)

patched:
[15:59:47] t/001_basic.pl ................ ok   150 ms ( 0.00 usr 0.00 sys + 0.08 cusr 0.07 csys = 0.15 CPU)
[15:59:47] t/002_pg_dump.pl .............. ok 10057 ms ( 0.23 usr 0.02 sys + 1.49 cusr 0.36 csys = 2.10 CPU)
[15:59:57] t/003_pg_dump_with_server.pl .. ok   509 ms ( 0.00 usr 0.00 sys + 0.09 cusr 0.08 csys = 0.17 CPU)
[15:59:58] t/004_pg_dump_parallel.pl ..... ok  1048 ms ( 0.01 usr 0.00 sys + 0.11 cusr 0.11 csys = 0.23 CPU)
[15:59:59] t/005_pg_dump_filterfile.pl ... ok  2398 ms ( 0.00 usr 0.00 sys + 0.34 cusr 0.20 csys = 0.54 CPU)
[16:00:01] t/010_dump_connstr.pl ......... ok  4762 ms ( 0.01 usr 0.00 sys + 2.15 cusr 0.42 csys = 2.58 CPU)
There is no noticeable difference between the two results.
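For what it's worth, the "indexed" variant discussed in this thread differs from a plain binary heap in that it also keeps a key-to-position map, so an arbitrary member's priority can be updated in O(log n) without a linear scan of the array. A rough Python sketch of that bookkeeping (hypothetical names; not the patch's binaryheap code):

```python
class IndexedMaxHeap:
    """Max-heap that tracks each key's array position for O(log n) updates."""

    def __init__(self):
        self.heap = []  # list of (priority, key) pairs
        self.pos = {}   # key -> index into self.heap

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i][1]] = i
        self.pos[self.heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self.heap[i][0] > self.heap[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            largest = i
            for c in (2 * i + 1, 2 * i + 2):  # left and right children
                if c < n and self.heap[c][0] > self.heap[largest][0]:
                    largest = c
            if largest == i:
                return
            self._swap(i, largest)
            i = largest

    def push(self, key, priority):
        self.heap.append((priority, key))
        self.pos[key] = len(self.heap) - 1
        self._sift_up(len(self.heap) - 1)

    def update(self, key, priority):
        # The index map makes locating the node O(1); no linear search needed.
        i = self.pos[key]
        self.heap[i] = (priority, key)
        self._sift_up(i)
        self._sift_down(i)

    def pop_max(self):
        self._swap(0, len(self.heap) - 1)
        priority, key = self.heap.pop()
        del self.pos[key]
        if self.heap:
            self._sift_down(0)
        return key, priority
```

The extra map is exactly the bookkeeping that a plain binary heap (as pg_dump's TopoSort uses) does not pay for, which is why verifying it introduces no regression on the unindexed path matters.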
Regards,
--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com