From: Stephen Frost <sfrost(at)snowman(dot)net>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: reid(dot)thompson(at)crunchydata(dot)com, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date: 2022-09-09 16:48:56
Message-ID: 20220909164856.GJ26002@tamriel.snowman.net
Lists: pgsql-hackers
Greetings,
* David Rowley (dgrowleyml(at)gmail(dot)com) wrote:
> On Thu, 1 Sept 2022 at 04:52, Reid Thompson
> <reid(dot)thompson(at)crunchydata(dot)com> wrote:
> > Add the ability to limit the amount of memory that can be allocated to
> > backends.
>
> Are you aware that relcache entries are stored in backend-local memory,
> and that, once we've added a relcache entry for a relation, there is
> currently no code which attempts to reduce the memory consumed by cache
> entries when there's memory pressure?
Short answer to this is yes, and that's an issue, but it isn't this
patch's problem to deal with; that's an issue that the relcache system
needs to be changed to address.
> It seems to me that if we had this feature as you propose, a backend
> could hit the limit and stay there just from the memory requirements
> of the relation cache after some number of tables have been accessed
> from the given backend. It's not hard to imagine a situation where
> the palloc() would start to fail during parse, which might make it
> quite infuriating for anyone trying to do something like:
Agreed that this could happen, but I don't imagine it to be super
likely; and even if it does, this is probably a better position to be
in, as the backend could then be disconnected from, at which point it
would go away and its memory would be freed, unlike the current
OOM-killer situation where we crash and go through recovery. We should
note this in the documentation, though, so that administrators
understand how this can occur and can take action to address it.
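To make that contrast concrete, here's a rough sketch of the kind of
allocation-time check involved; the names below are invented for
illustration and aren't the patch's actual code:

#include "postgres.h"

/* Illustrative only: hypothetical accounting state, not the patch's. */
static Size backend_allocated_bytes = 0;
static Size max_total_backend_memory = 512 * 1024 * 1024; /* 512MB cap */

/*
 * Hypothetical wrapper: refuse the allocation with an ordinary ERROR,
 * which aborts just the current transaction and leaves a backend the
 * client can disconnect from, rather than letting the kernel's OOM
 * killer take down the whole cluster and force crash recovery.
 */
static void *
palloc_with_limit(Size size)
{
    if (backend_allocated_bytes + size > max_total_backend_memory)
        ereport(ERROR,
                (errcode(ERRCODE_OUT_OF_MEMORY),
                 errmsg("backend memory limit exceeded")));
    backend_allocated_bytes += size;
    return palloc(size);
}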
> I think a better solution to this problem would be to have "memory
> grants", where we configure some amount of "pool" memory that backends
> are allowed to use for queries. The planner would have to add up the
> number of work_mem units that the given query is expected to use, and
> before that query starts, the executor would have to "check out" that
> amount of memory from the pool and return it when finished. If there
> is not enough memory in the pool, then the query would have to wait
> until enough memory is available. This creates a deadlocking hazard
> that the deadlock detector would need to be made aware of.
Sure, that also sounds great, and a query acceptance system would be
wonderful. If someone is working on that with an expectation of it
landing before v16, great. Otherwise, I don't see it as relevant to the
question of whether we should include this feature, and I'm not even
sure that we'd refuse this feature if we already had an acceptance
system: it would still be useful as a stop-gap should the acceptance
system guess wrong and we not realize it until it's too late.
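For what it's worth, the checkout/return mechanics you describe could
look something like the standalone sketch below (plain pthreads rather
than anything in our tree; a real version would live in shared memory,
use our own locking primitives, and, as you say, teach the deadlock
detector about the waits):

#include <pthread.h>
#include <stddef.h>

typedef struct MemoryGrantPool
{
    pthread_mutex_t lock;
    pthread_cond_t  freed;
    size_t          available;   /* bytes not currently granted out */
} MemoryGrantPool;

/* Block until "bytes" can be granted, then reserve them from the pool. */
static void
grant_checkout(MemoryGrantPool *pool, size_t bytes)
{
    pthread_mutex_lock(&pool->lock);
    while (pool->available < bytes)
        pthread_cond_wait(&pool->freed, &pool->lock);
    pool->available -= bytes;
    pthread_mutex_unlock(&pool->lock);
}

/* Return a grant to the pool and wake up any waiting backends. */
static void
grant_checkin(MemoryGrantPool *pool, size_t bytes)
{
    pthread_mutex_lock(&pool->lock);
    pool->available += bytes;
    pthread_cond_broadcast(&pool->freed);
    pthread_mutex_unlock(&pool->lock);
}

The wait in grant_checkout() is where the deadlocking hazard comes
from: two queries each holding part of the pool while waiting for the
remainder will block forever unless the deadlock detector knows about
these waits.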
Thanks,
Stephen