From: Luca Ferrari <fluca1978(at)gmail(dot)com>
To: Alexander Pyhalov <alp(at)sfedu(dot)ru>
Cc: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: PostgreSQL memory usage
Date: 2019-10-17 06:51:42
Message-ID: CAKoxK+67NdDkko2LyJ97ik15K+4FnpnEU_B_etXisvyfXDTvrQ@mail.gmail.com
Lists: pgsql-general
On Wed, Oct 16, 2019 at 6:30 PM Alexander Pyhalov <alp(at)sfedu(dot)ru> wrote:
> I see that at some point several postgresql backends start consuming about 16 GB RAM. If we account for shared_buffers, it means 4 GB RAM for private backend memory. How can we achieve such numbers? I don't see any long-running (or complex) queries (however, there could be long-running transactions and queries to large partitioned tables). But how could they consume 512 * work_mem memory?
I'm not sure they are consuming 512 times work_mem; there is a whole
lot of stuff a process can allocate, and understanding it requires
digging into the process memory map (something I'm not good at!).
What is certain is that a single process (backend) can consume up to
work_mem once per memory-hungry node in a query plan (e.g., a sort or
hash), so it can consume multiple times the work_mem value if that
memory is available.
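To make the arithmetic concrete, here is a back-of-the-envelope sketch of where a figure like "512 * work_mem" could come from, and of the rough per-query ceiling implied by per-node allowances. The work_mem, shared_buffers, node, and worker counts below are assumptions chosen to match the numbers in the question, not settings confirmed anywhere in this thread.

```python
# Hypothetical values, not taken from the thread.
MB = 1024 * 1024
GB = 1024 * MB

work_mem = 8 * MB          # assumed setting
shared_buffers = 12 * GB   # assumed; shared memory shows up in each backend's RSS
backend_rss = 16 * GB      # observed resident size per the question

# Subtracting shared memory leaves the truly private allocation,
# expressed as multiples of work_mem.
private_memory = backend_rss - shared_buffers
print(private_memory // work_mem)  # -> 512

# Each sort/hash node in a plan may use up to work_mem, and each
# parallel worker gets its own allowance, so a rough upper bound
# for one query is nodes * (workers + 1) * work_mem.
nodes, workers = 4, 3
per_query_cap = nodes * (workers + 1) * work_mem
print(per_query_cap // MB)  # -> 128 (MB)
```

The point of the sketch is only that "512 multiples" need not mean 512 concurrent sorts; most of a large RSS can simply be shared buffers touched by the backend, with the remainder spread across many smaller allocations.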
Luca