From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Jan Wieck <jan(at)wi3ck(dot)info>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Postgres hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Limiting memory allocation
Date: 2022-05-18 16:43:19
Message-ID: 20220518164319.GJ9030@tamriel.snowman.net
Lists: pgsql-hackers
Greetings,
* Jan Wieck (jan(at)wi3ck(dot)info) wrote:
> On 5/17/22 18:30, Stephen Frost wrote:
> >This isn’t actually a solution though and that’s the problem- you end up
> >using swap but if you use more than “expected” the OOM killer comes in and
> >happily blows you up anyway. Cgroups are containers and exactly what kube
> >is doing.
>
> Maybe I'm missing something, but what is it that you would actually consider
> a solution? Knowing your current memory consumption doesn't make the need
> for allocating some right now go away. What do you envision the response of
> PostgreSQL to be if we had that information about resource pressure? I don't
> see us using mallopt(3) or malloc_trim(3) anywhere in the code, so I don't
> think any of our processes give back unused heap at this point (please
> correct me if I'm wrong). This means that even if we knew about the memory
> pressure of the system, adjusting things like work_mem on the fly may not do
> much at all, unless there is a constant turnover of backends.
>
> So what do you propose PostgreSQL's response to high memory pressure to be?
Fail the allocation, just as most PG systems are set up to do. In such
a case, PG will almost always be able to fail the transaction, free up
the memory used, and continue running *without* crashing.
Thanks,
Stephen