From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, Yura Sokolov <y(dot)sokolov(at)postgrespro(dot)ru>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: plan with result cache is very slow when work_mem is not enough
Date: 2021-05-08 03:26:57
Message-ID: CAApHDvqn17xP4yVkoYsTNu4X=1srN62z_MK5Wfca-Of1h_-Ycw@mail.gmail.com
Lists: pgsql-hackers
On Sat, 8 May 2021 at 14:43, Justin Pryzby <pryzby(at)telsasoft(dot)com> wrote:
> You could put this into a separate function called by ExecEndResultCache().
> Then anyone that breaks the memory accounting can also call the function in the
> places they changed to help figure out what they broke.
I almost did it that way and left a call to it in remove_cache_entry()
#ifdef'd out. However, as mentioned, I'm more concerned about the
accounting being broken and left broken than I am with making it take
a little less time to find the exact place to fix the breakage. If
the breakage were to occur when adding a new entry to the cache, then
giving users an easy way to check the memory accounting only during
evictions might not narrow it down much. The only way to
highlight the problem as soon as it occurs would be to validate the
memory tracking every time the mem_used field is changed. I think that
would be overkill.
I also find it hard to imagine what other reasons we'll have in the
future to adjust 'mem_used'. At the moment there are 4 places. Two
that add bytes and two that subtract bytes. They're all hidden inside
reusable functions that are in charge of adding and removing entries
from the cache.
David