From: Reid Thompson <reid(dot)thompson(at)crunchydata(dot)com>
To: pgsql-hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Cc: reid(dot)thompson(at)crunchydata(dot)com
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date: 2022-09-07 00:25:06
Message-ID: 36f05c1e1c8b26ce92855f7fbea3e56d5ae15899.camel@crunchydata.com
Lists: pgsql-hackers
On Fri, 2022-09-02 at 09:30 +0200, Drouvot, Bertrand wrote:
> Hi,
>
> I'm not sure we are choosing the right victims here (aka the ones
> that are doing the request that will push the total over the limit).
>
> Imagine an extreme case where a single backend consumes say 99% of
> the limit, shouldn't it be the one to be "punished"? (and somehow forced
> to give the memory back).
>
> The problem that i see with the current approach is that a "bad"
> backend could impact all the others and continue to do so.
>
> what about punishing say the highest consumer , what do you think?
> (just speaking about the general idea here, not about the implementation)
Initially, we believed that punishing the detector is reasonable if it
helps administrators avoid the OOM killer and resource starvation, but
we can and should expand on this idea. Another thought is that, rather
than just failing the query/transaction, we could have the affected
backend do a clean exit, freeing all of its resources.
--
Reid Thompson
Senior Software Engineer
Crunchy Data, Inc.
reid(dot)thompson(at)crunchydata(dot)com
www.crunchydata.com