Re: Debugging shared memory issues on CentOS

From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Mack Talcott <mack(dot)talcott(at)gmail(dot)com>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Debugging shared memory issues on CentOS
Date: 2013-12-12 14:56:59
Message-ID: CAHyXU0wv+ONa_ou+0bH92EcmX2smfRyh_ikN-vLvXe87p6GiWw@mail.gmail.com
Lists: pgsql-performance

On Wed, Dec 11, 2013 at 9:39 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Mack Talcott <mack(dot)talcott(at)gmail(dot)com> writes:
>> The pattern I am seeing is that postgres processes keep growing in
>> shared (this makes sense as they access more of the shared memory, as
>> you've pointed out) but also process-specific memory as they run more
>> queries. The largest ones are using around 300mb of process-specific
>> memory, even when they're idle and outside of any transactions.
>
> There's quite a lot of stuff that a PG process will cache in local memory
> once it's acquired the info, for example:
> - relcache (relation descriptors)
> - catcache (system catalog entries)
> - compiled trees for plpgsql functions
>
> 300mb worth of that stuff seems on the high side, but perhaps you have
> lots and lots of tables, or lots and lots of functions?

This has got to be the problem. It's known that pathological
workloads (lots and lots of tables, views, and functions) abuse the
backend-local caches. There's no cap on cache memory, so over time
each backend just accumulates entries until there's nothing left to
cache. For most applications this doesn't even show up on the radar,
but at 300MB per postgres backend you'll burn through that 8GB pretty
quickly. It's tempting to say "there should be a limit on backend-local
cache," but it's not clear the extra bookkeeping is really worth it all
things considered. There was some discussion about this (see the
archives).
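A quick way to sanity-check the theory is to count how much catalog
material each backend can end up caching. This is just a rough
diagnostic sketch, not something from the thread -- large counts here
make 300MB of per-backend cache plausible:

    -- Rough gauge of relcache/catcache/plpgsql cache pressure.
    SELECT
      (SELECT count(*) FROM pg_class)     AS relations,  -- tables, views, indexes, ...
      (SELECT count(*) FROM pg_proc)      AS functions,  -- includes plpgsql functions
      (SELECT count(*) FROM pg_attribute) AS columns;    -- per-column catalog entries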

Workarounds:
*) install a connection pooler (as Tom noted), in particular pgbouncer
(see the sketch after this list). For workloads like this you will
want to be spartan with the number of physical connections -- say,
1 * number of cores. For this option to work you need to use
transaction mode, which in turn limits the use of session-dependent
features (advisory locks, NOTIFY, prepared statements). Also, if your
client stack is java you need to take some extra steps.
*) add memory
*) force connections to recycle every X period of time
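A minimal pgbouncer configuration along those lines -- the values are
illustrative, not from the thread, so tune pool sizes to your core
count. Note that server_lifetime also handles the connection-recycling
workaround, since recycling the server connection discards its
accumulated backend-local cache:

    ; pgbouncer.ini -- minimal transaction-mode pool (illustrative values)
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction   ; required for aggressive connection sharing
    default_pool_size = 8     ; ~1 * number of cores
    max_client_conn = 500     ; app connections multiplexed onto the small pool
    server_lifetime = 3600    ; recycle server connections hourly, capping
                              ; backend-local cache growth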

merlin
