From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Joseph D Wagner <joe(at)josephdwagner(dot)info>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: proposal: Allocate work_mem From Pool
Date: 2022-07-11 04:39:30
Message-ID: 20220711043930.GR13040@telsasoft.com
Lists: pgsql-hackers
On Sun, Jul 10, 2022 at 08:45:38PM -0700, Joseph D Wagner wrote:
> However, that's risky because it's 3GB per operation, not per
> query/connection; it could easily spiral out of control.
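To make the quoted concern concrete, here is a back-of-envelope sketch (in Python; all the numbers are invented for illustration) of how a per-operation limit multiplies across plan nodes and concurrent queries:

```python
# Illustrative arithmetic only -- all numbers are invented.
# work_mem bounds each sort/hash operation, not each query or connection,
# so the worst case scales with plan nodes and concurrency.
work_mem_gb = 3          # the per-operation setting from the quoted email
nodes_per_query = 4      # e.g. two sorts plus two hash joins (hypothetical)
concurrent_queries = 20  # hypothetical load

worst_case_gb = work_mem_gb * nodes_per_query * concurrent_queries
print(worst_case_gb)  # 240 -- far more than the "3GB" setting suggests
```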
This is a well-known deficiency.
I suggest digging up the old threads that looked into it.
It would also be useful to include links to the prior discussion.
> I think it would be better if work_mem was allocated from a pool of memory
I think this has been proposed before, and the objection to this idea
is probably that query plans will become inconsistent and end up being
sub-optimal.
work_mem is considered at planning time, but I think you are only considering
its application at execution time. A query that was planned with the configured
work_mem but can't obtain the expected amount at execution time might perform
poorly. Maybe it should be replanned with a lower work_mem, but that would lose
the arm's-length relationship between the planner and executor.
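To illustrate the planner/executor coupling problem, here is a rough sketch (in Python; the cost model and all names are invented, this is not PostgreSQL code) of why a smaller-than-planned grant forces a replan:

```python
# Illustrative sketch only -- not PostgreSQL code; all names invented.
# Shows how granting less memory at execution time than the planner
# assumed forces the executor back into the planner's territory.

def plan_query(query, work_mem_kb):
    # Toy cost model: choose a hash join only if the build side fits
    # in the memory budget the planner believes it will get.
    if query["build_side_kb"] <= work_mem_kb:
        return {"node": "HashJoin", "assumed_mem_kb": work_mem_kb}
    return {"node": "MergeJoin", "assumed_mem_kb": work_mem_kb}

def execute(query, plan, granted_kb):
    if granted_kb < plan["assumed_mem_kb"]:
        # The pool granted less than planned: either spill (slow) or
        # replan with the smaller budget -- which is exactly the loss
        # of the planner/executor separation described above.
        return plan_query(query, granted_kb)
    return plan

query = {"build_side_kb": 2048}
plan = plan_query(query, work_mem_kb=4096)     # planner picks HashJoin
final = execute(query, plan, granted_kb=1024)  # pool only grants 1MB
print(final["node"])                           # falls back to MergeJoin
```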
Should an expensive query wait a bit to try to get more work_mem?
What do you do if 3 expensive queries are all waiting?
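A minimal sketch of what such a wait-then-settle pool might look like (Python, purely hypothetical API and policy; not a proposal for the actual implementation). Note the second wait is unbounded, which is where several expensive queries all waiting for large grants could starve one another:

```python
# Hypothetical sketch of a shared work_mem pool. An expensive query
# waits briefly for its full request, then settles for what is free.
import threading

class WorkMemPool:
    def __init__(self, total_kb):
        self.lock = threading.Condition()
        self.free_kb = total_kb

    def acquire(self, want_kb, min_kb, timeout_s):
        with self.lock:
            # Wait briefly for the full request to become available.
            self.lock.wait_for(lambda: self.free_kb >= want_kb,
                               timeout=timeout_s)
            # Fall back to whatever is free, but insist on min_kb.
            # This wait is unbounded: if several queries all hold big
            # grants and all wait here, they starve each other.
            self.lock.wait_for(lambda: self.free_kb >= min_kb)
            granted = min(want_kb, self.free_kb)
            self.free_kb -= granted
            return granted

    def release(self, kb):
        with self.lock:
            self.free_kb += kb
            self.lock.notify_all()

pool = WorkMemPool(total_kb=4096)
got = pool.acquire(want_kb=3072, min_kb=512, timeout_s=0.01)
print(got)   # 3072 -- the pool had room for the full request
got2 = pool.acquire(want_kb=3072, min_kb=512, timeout_s=0.01)
print(got2)  # 1024 -- times out waiting, settles for what is left
```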
--
Justin