From: | David Rowley <dgrowleyml(at)gmail(dot)com> |
---|---|
To: | Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com> |
Cc: | PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | Re: strange slow query - lost lot of time somewhere |
Date: | 2022-05-02 21:48:24 |
Message-ID: | CAApHDvpFsSJAThNLtqaWvA7axQd-VOFct=FYQN5muJV-sYtXjw@mail.gmail.com |
Lists: | pgsql-hackers |
On Mon, 2 May 2022 at 21:00, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com> wrote:
> I found a query that is significantly slower with more memory
Can you clarify what you mean here? More memory was installed on the
machine? or work_mem was increased? or?
> plan 1 - fast https://explain.depesz.com/s/XM1f
>
> plan 2 - slow https://explain.depesz.com/s/2rBw
If it was work_mem you increased, it seems strange that the plan would
switch over to using a Nested Loop / Memoize plan. Only 91 rows are
estimated on the outer side of the join. It's hard to imagine that
work_mem was so low that the Memoize costing code thought there would
ever be cache evictions.
> Strange - the time of last row is +/- same, but execution time is 10x worse
>
> It looks like slow environment cleaning
Can you also show EXPLAIN for the Memoize plan without ANALYZE?
Does the slowness present every time that plan is executed?
Can you show the EXPLAIN ANALYZE of the nested loop plan with
enable_memoize = off? You may need to disable hash and merge joins.
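For reference, the diagnostics requested above could be gathered along these lines (a sketch only; `/* Pavel's query */` stands in for the actual query, which is not shown in this thread):

```sql
-- Plan without execution, to see the Memoize cost estimates:
EXPLAIN SELECT /* Pavel's query */;

-- Force the planner away from Memoize while keeping the nested loop.
-- Disabling hash and merge joins may be needed so the planner does not
-- switch join methods instead of just dropping the Memoize node.
SET enable_memoize = off;
SET enable_hashjoin = off;
SET enable_mergejoin = off;
EXPLAIN ANALYZE SELECT /* Pavel's query */;

-- Restore the default planner settings afterwards:
RESET enable_memoize;
RESET enable_hashjoin;
RESET enable_mergejoin;
```

`enable_memoize`, `enable_hashjoin`, and `enable_mergejoin` are standard PostgreSQL planner GUCs; they only discourage the planner, so comparing the two EXPLAIN ANALYZE outputs shows where the extra time goes.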
David