From: Johannes Truschnigg <johannes(at)truschnigg(dot)info>
To: Jean-Christophe Boggio <postgresql(at)thefreecat(dot)org>
Cc: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: One PG process eating more than 40GB of RAM and getting killed by OOM
Date: 2023-10-13 13:13:28
Message-ID: ZSlCeD80ThhvWp8a@vault.lan
Lists: pgsql-admin
You will want to try decreasing work_mem to a sane value first, before
looking at anything else.
Check out the official docs:
https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-WORK-MEM
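A minimal sketch of how you might lower it (the 64MB value is just an illustration; pick what fits your workload):

```sql
-- Per session, to test the effect before changing anything globally:
SET work_mem = '64MB';

-- Or cluster-wide (requires superuser), then reload the config:
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();
```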
The gist is that work_mem is not a limit that applies per
session/connection/query, but per sort or hash node, of which there can be
many in a complex query. That is why 1GB of work_mem can end up consuming
several multiples of that, if you are (un)lucky enough.
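To see how many such nodes a given query plans, you can inspect its plan (the query below is purely hypothetical):

```sql
-- Each Sort, Hash, or HashAggregate node in the plan may use up to
-- work_mem on its own, so count them to estimate worst-case memory.
EXPLAIN
SELECT a.id, b.val
FROM a
JOIN b USING (id)
ORDER BY b.val;
```

With work_mem = '1GB', a plan containing, say, five sort/hash nodes could need several gigabytes in a single backend; multiply by concurrent sessions and you quickly reach the tens of gigabytes described in this thread.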
--
with best regards:
- Johannes Truschnigg ( johannes(at)truschnigg(dot)info )
www: https://johannes.truschnigg.info/
phone: +436502133337
xmpp: johannes(at)truschnigg(dot)info