Re: Large PostgreSQL servers

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Merlin Moncure <mmoncure(at)gmail(dot)com>
Cc: Kjetil Nygård <polpot78(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Large PostgreSQL servers
Date: 2012-03-22 15:02:27
Message-ID: CAOR=d=1f9c-dCMPk99UYA78nF1yvDF3wZ=bcLh-xAEqwWQVgig@mail.gmail.com
Lists: pgsql-general

On Thu, Mar 22, 2012 at 8:46 AM, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
> large result sets) or cached structures like plpgsql plans.  Once you
> go over 50% memory into shared, it's pretty easy to overcommit your
> server and burn yourself.  Of course, 50% of 256GB server is a very
> different animal than 50% of a 4GB server.

There are other issues you can run into with a large shared_buffers as well.
If you've got a large shared_buffers setting but only regularly hit a
small subset of your db (say 32GB shared_buffers but only about 4GB
touched regularly by your app), then it's quite possible that older
shared_buffers pages will get swapped out because they're not being
used. Then, when the db goes to hit a page in shared_buffers, the OS
has to swap it back in. What was supposed to make your db much
faster has now made it much slower.
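As a rough way to check how much of shared_buffers is actually holding pages, you can query the pg_buffercache contrib module (a sketch, assuming the extension is installed and psql can reach the server; a NULL relfilenode means the buffer is empty):

```shell
# Sketch: count buffers that actually hold a page vs. the total allocated.
# count(relfilenode) only counts non-NULL rows, i.e. buffers in use.
psql -c "SELECT count(relfilenode) AS buffers_used,
                count(*)           AS buffers_total
         FROM pg_buffercache;"
```

If buffers_used stays far below buffers_total over time, that's a hint your working set is much smaller than shared_buffers.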

On Linux, the OS tends to swap out unused memory to make room for
file buffers. While you can set vm.swappiness to 0 to slow this
down, the OS will eventually swap out the least recently used pages
anyway. On large-memory servers, often the only real solution is to
just turn off swap.
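The knobs above look like this on a typical Linux box (sketch; the sysctl and swapoff commands need root, and persisting the setting via /etc/sysctl.conf is assumed):

```shell
# Check the current swappiness (the Linux default is usually 60)
cat /proc/sys/vm/swappiness

# Discourage swapping -- 0 slows it down but does not prevent it
sysctl -w vm.swappiness=0    # persist with "vm.swappiness = 0" in /etc/sysctl.conf

# On a large-memory server, disable swap entirely
swapoff -a                   # also comment out swap entries in /etc/fstab
```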
