From: John R Pierce <pierce(at)hogranch(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Understanding Postgres Memory Usage
Date: 2016-08-25 20:48:24
Message-ID: 257030f3-b9ed-1ad7-ebef-189c88608e41@hogranch.com
Lists: pgsql-general
On 8/25/2016 9:58 AM, Theron Luhn wrote:
> I do not remember the exact formula, but it should be something like
> "work_mem * max_connections + shared_buffers", and it should be around
> 80% of your machine's RAM (minus RAM used by other processes and the
> kernel). That will save you from the OOM killer.
>
a single query can use multiple work_mem allocations if it has sorts,
hash joins, subqueries, etc., so that formula understates the worst case.
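To make that concrete, here is a minimal sketch of the arithmetic using the settings quoted below; the per-query operation count is an assumption for illustration, not a PostgreSQL-defined constant:

```python
# Rough memory estimates for the configuration discussed in this thread.
# Assumption: each backend may run several work_mem-sized operations at
# once (sorts, hash joins); ops_per_query is a hypothetical multiplier.
work_mem_mb = 4            # work_mem = 4MB
max_connections = 100      # max_connections = 100
shared_buffers_mb = 512    # shared_buffers = 512MB
ops_per_query = 3          # assumed sorts/hashes per query

naive = work_mem_mb * max_connections + shared_buffers_mb
worst = work_mem_mb * ops_per_query * max_connections + shared_buffers_mb
print(naive, worst)  # 912 1712  (MB)
```

Even the pessimistic figure stays well under 4GB here, which is why the backend growth described below points at something other than work_mem.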
> My Postgres is configured with *very* conservative values. work_mem
> (4MB) * max_connections (100) + shared buffers (512MB) = ~1GB, yet
> Postgres managed to fill up a 4GB server. I'm seeing workers
> consuming hundreds of MBs of memory (and not releasing any of it until
> the connection closes), despite work_mem being 4MB.
are you running queries that return large result sets?
--
john r pierce, recycling bits in santa cruz