From: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>, Oleksii Kliukin <alexk(at)hintbits(dot)com>
Cc: Álvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, Jan Wieck <jan(at)wi3ck(dot)info>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Postgres hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Limiting memory allocation
Date: 2022-05-20 23:08:49
Message-ID: 76b31a7e-2b5d-c361-a79a-05b8c00378b9@enterprisedb.com
Lists: pgsql-hackers
On 5/20/22 21:50, Stephen Frost wrote:
> Greetings,
>
> ...
>
>>> How exactly this would work is unclear to me; maybe one
>>> process keeps an eye on it in an OS-specific manner,
>
> There seems to be a lot of focus on trying to implement this as "get the
> amount of free memory from the OS and make sure we don't go over that
> limit" and that adds a lot of OS-specific logic which complicates things
> and also ignores the use-cases where an admin wishes to limit PG's
> memory usage due to other processes running on the same system. I'll
> point out that the LD_PRELOAD library doesn't even attempt to do this,
> even though it's explicitly for Linux, but uses an environment variable
> instead.
>
> In PG, we'd have that be a GUC that an admin is able to set and then we
> track the memory usage (perhaps per-process, perhaps using some set of
> buckets, perhaps locally per-process and then in a smaller number of
> buckets in shared memory, or something else) and fail an allocation when
> it would go over that limit, perhaps only when it's a regular user
> backend or with other conditions around it.
>
I agree a GUC setting a memory target is a sensible starting point.
I wonder if we might eventually use this to define memory budgets. One
of the common questions I get is how to restrict users from setting
work_mem too high or from doing too many memory-hungry things in general.
Currently there's no way to do that, because we have no way to limit
work_mem values, and even if we had one, the user could still construct a
more complex query with more memory-hungry operations.
But I think it's also that we weren't sure what to do after hitting such
a limit - should we try replanning the query with a lower work_mem value,
or something else?
However, if simply failing the malloc() is acceptable, maybe we could use
this mechanism to enforce memory budgets like that?
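To illustrate what I have in mind, here's a very rough sketch of the
failure path. All the names here (the GUC, the counter, the wrapper) are
made up, and a real patch would presumably hook into the memory context
allocators (aset.c etc.) rather than wrap malloc() directly:

    #include "postgres.h"

    /* hypothetical GUC set by the admin, 0 means "no limit" */
    static int  max_total_memory_kb = 0;

    /* bytes this backend has obtained from the OS so far */
    static Size my_allocated_bytes = 0;

    static void *
    limited_malloc(Size size)
    {
        void   *ptr;

        /* fail the allocation instead of letting the OOM killer act */
        if (max_total_memory_kb > 0 &&
            my_allocated_bytes + size > (Size) max_total_memory_kb * 1024)
            return NULL;    /* caller reports "out of memory" as usual */

        ptr = malloc(size);
        if (ptr != NULL)
            my_allocated_bytes += size;
        return ptr;
    }

That only accounts per-process, of course - enforcing a budget across a
group of backends would need the counter (or some buckets) in shared
memory, as discussed above.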
>> What would be useful is a way for Postgres to count the amount of memory
>> allocated by each backend. This could be advantageous for giving per-backend
>> memory usage to the user, as well as for enforcing a limit on the total amount
>> of memory allocated by the backends.
>
> I agree that this would be independently useful.
>
Well, we already have memory accounting built into the memory context
infrastructure. It kinda does the same thing as the malloc() wrapper,
except that it does not publish the information anywhere and it's
per-context (so we have to walk the context tree recursively).
So maybe we could make this on-request somehow? Say, we'd send a signal
to the process, and it'd run MemoryContextMemAllocated() on the top
memory context and store the result somewhere.
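Something like this, perhaps. The signal plumbing and the shared array
are entirely hypothetical, but MemoryContextMemAllocated() and
TopMemoryContext exist today:

    #include "postgres.h"
    #include "utils/memutils.h"

    /* hypothetical: one slot per backend, in shared memory */
    extern Size *backend_mem_allocated;
    extern int   MyBackendSlot;

    /*
     * Invoked (e.g. from CHECK_FOR_INTERRUPTS) once the backend has
     * received the hypothetical "report your memory" signal.
     */
    static void
    ProcessReportMemoryRequest(void)
    {
        /* sum the bytes malloc'd by TopMemoryContext and all children */
        Size    total = MemoryContextMemAllocated(TopMemoryContext, true);

        /*
         * Publish it for other backends to read (a real patch would use
         * pg_atomic_* or a spinlock here rather than a bare store).
         */
        backend_mem_allocated[MyBackendSlot] = total;
    }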
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company