From: Vladimir Sitnikov <sitnikov(dot)vladimir(at)gmail(dot)com>
To: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: max_memory_per_backend GUC to limit backend's memory usage
Date: 2018-03-23 15:58:55
Message-ID: CAB=Je-FOuN4Z0itYxxMz3=RZi0BO8ZaJznB6eOfMQU9_TKhaMQ@mail.gmail.com
Hi,
I've got a problem with PostgreSQL 9.6.5: a backend gets killed by the OOM
killer, which shuts the whole DB down (the postmaster restarts all backends
to recover from possible shared-memory corruption).
Of course, the OOM case itself needs to be investigated
(MemoryContextStatsDetail, etc.), but I wonder if the DB could be made more
robust.
The sad thing is that a single backend crash results in a DB-wide restart,
which interrupts lots of transactions.
I wonder if a GUC could be implemented that limits each backend's memory
use, so that only the offending backend fails.
For instance: max_memory_per_backend=100MiB.
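To make the shape of the knob concrete, here is a hypothetical sketch of how
such a setting could be declared, written as a custom GUC registered from an
extension's _PG_init just to illustrate it (a real core GUC would live in
guc.c; the name, context, and units here are my assumptions, not a patch):

#include "postgres.h"
#include "fmgr.h"
#include "utils/guc.h"

PG_MODULE_MAGIC;

/* hypothetical knob: soft cap on a backend's total memory, in kB */
static int	max_memory_per_backend = 0;

void		_PG_init(void);

void
_PG_init(void)
{
	DefineCustomIntVariable("max_memory_per_backend",
							"Soft cap on the total memory one backend may allocate (kB).",
							"Zero disables the limit.",
							&max_memory_per_backend,
							0,			/* boot value: disabled */
							0,			/* min */
							INT_MAX,	/* max */
							PGC_SUSET,
							GUC_UNIT_KB,
							NULL, NULL, NULL);
}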
The idea is to increase stability by limiting each process individually. Of
course, this would result in "out of memory" errors whenever a single query
genuinely needs more than the limit (say, 100500MiB, e.g. because the
planner underestimates the size of a hash join). As far as I understand, it
is still safer to terminate one bad backend than to have the OOM killer take
down all the processes.
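To illustrate the enforcement side, here is a minimal standalone sketch
(plain C, not PostgreSQL code; all the names are made up): keep a
per-process running total of allocated bytes and refuse allocations that
would exceed the cap, so the process can report "out of memory" and fail on
its own instead of being SIGKILLed by the OS:

#include <stdio.h>
#include <stdlib.h>

static size_t max_memory_per_backend = 100 * 1024 * 1024;	/* 100 MiB cap */
static size_t allocated_bytes = 0;							/* running total */

/* Allocate with a size header so frees can be accounted for, too. */
static void *
limited_malloc(size_t size)
{
	if (allocated_bytes + size > max_memory_per_backend)
		return NULL;			/* caller reports "out of memory" and aborts
								 * only its own work */

	size_t	   *chunk = malloc(sizeof(size_t) + size);

	if (chunk == NULL)
		return NULL;

	*chunk = size;
	allocated_bytes += size;
	return chunk + 1;			/* hand back the memory after the header */
}

static void
limited_free(void *ptr)
{
	if (ptr == NULL)
		return;

	size_t	   *chunk = (size_t *) ptr - 1;

	allocated_bytes -= *chunk;
	free(chunk);
}

int
main(void)
{
	/* The first allocation fits under the cap... */
	void	   *a = limited_malloc(60 * 1024 * 1024);

	/* ...the second would push the total past 100 MiB and is refused. */
	void	   *b = limited_malloc(60 * 1024 * 1024);

	printf("a=%p b=%p allocated=%zu bytes\n", a, b, allocated_bytes);
	limited_free(a);
	limited_free(b);
	return 0;
}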
I did some research and have not found a previous discussion of this idea.
Vladimir Rusinov wrote:
> FWIW, lack of per-connection and/or global memory limit for work_mem is
> major PITA
>
> https://www.postgresql.org/message-id/CAE1wr-ykMDUFMjucDGqU-s98ARk3oiCfhxrHkajnb3f%3DUp70JA%40mail.gmail.com
Vladimir