Re: performance tuning: shared_buffers, sort_mem; swap

From: Bruce Momjian <pgman@candle.pha.pa.us>
To: "Thomas O'Connell" <tfo@monsterlabs.com>
Cc: pgsql-admin@postgresql.org
Subject: Re: performance tuning: shared_buffers, sort_mem; swap
Date: 2002-08-13 16:27:55
Message-ID: 200208131627.g7DGRtJ10678@candle.pha.pa.us
Lists: pgsql-admin

Thomas O'Connell wrote:
> In article <200208131556.g7DFuH008873@candle.pha.pa.us>,
> pgman@candle.pha.pa.us (Bruce Momjian) wrote:
>
> > Well, it doesn't really matter who is causing the swapping. If the
> > total load on your machine needs more memory than your RAM can hold,
> > you are better off reducing your PostgreSQL shared buffers.
>
> So the idea would be:
>
> 1. start with the numbers above.
> 2. benchmark postgres on the machine with those numbers set (creating
> enough load to require plenty of resource use in shared_buffers/sort_mem)
> 3. monitor swap.
> 4. if heavy swapping occurs, reduce the amount of shared memory
> allocated to shared_buffers/sort_mem.
>
> right?
Yes.
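
For example, a rough sketch of that loop (the numbers are only
illustrative starting points, and the data directory path is whatever
your installation uses):

    # postgresql.conf -- in 7.2, shared_buffers is a count of 8kB
    # buffers and sort_mem is in kB, allocated per sort, per backend
    shared_buffers = 16384          # 16384 * 8kB = 128MB
    sort_mem = 4096                 # 4MB per in-memory sort

    # restart so the new shared_buffers value takes effect, then
    # watch swap activity while the benchmark load runs
    pg_ctl restart -D /usr/local/pgsql/data
    vmstat 5                        # nonzero si/so columns mean swapping
    free -m                         # overall memory and swap usage

If the si/so columns in vmstat stay at or near zero under peak load, the
settings fit in RAM; if they climb, back shared_buffers and/or sort_mem
down.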

> On sort of a side note, here's the situation I've got:
>
> I'm currently running Postgres on a couple of boxes with decent RAM and
> processors. Each Postgres box, though, is also running several Apache
> servers. The Apache servers are running web applications that hit
> Postgres, so when load on the box is high, it's caused by both Apache
> and Postgres.
>
> We've had the issue before where Postgres dies under heavy load
> (meaning Apache is logging several requests per minute and stressing
> Postgres, too) with an error saying that shared memory is probably not
> configured appropriately.
>
> Is it possible to set the kernel resources and shared_buffers such that
> Postgres won't be the point of failure when it tries to access more
> shared memory than is currently available?
>
> I guess the issue is: when kernel resources are maxed out, does
> Postgres' architecture mean that when an IPC call fails, it will be the
> piece of the system to go down? For example, if SHMALL/SHMMAX are
> configured to allow 128MB of shared memory on a box with 512MB of RAM,
> plus a little extra to provide for Apache, and Postgres is set to use
> 128MB of shared memory, is it a problem with our settings if Postgres
> crashes when load is high? In other words, could Apache be using up the
> extra SHMALL/SHMMAX headroom, so that Postgres doesn't really have
> 128MB of shared memory to work with?
>
> The trick, then, would seem to be to monitor swapping, but also to
> monitor overall shared memory usage as we approach the upper limits of
> available resources.

Assuming you are running 7.2.X, PostgreSQL allocates all of its shared
memory at startup and never requests any more, so another process cannot
later take that memory away from a running server. However, the Linux
kernel has code (the out-of-memory killer) that starts killing processes
when memory runs low, and that is perhaps what you are seeing.
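
If you want to check that the kernel limits and the actual shared memory
usage match what you expect, the stock Linux tools are enough (the 128MB
figure below is just an example):

    # current SysV shared memory limits
    cat /proc/sys/kernel/shmmax    # largest single segment, in bytes
    cat /proc/sys/kernel/shmall    # total shared memory, in pages

    # who is holding shared memory segments right now, and how much
    ipcs -m

    # raise shmmax to 128MB for this boot (add to a boot script to persist)
    echo 134217728 > /proc/sys/kernel/shmmax

If the postmaster were killed by that kernel code, you would normally
find a message about the killed process in the kernel log (dmesg or
/var/log/messages) from around the time of the crash.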

--
Bruce Momjian                    | http://candle.pha.pa.us
pgman@candle.pha.pa.us           | (610) 359-1001
+ If your life is a hard drive,  | 13 Roberts Road
+ Christ can be your backup.     | Newtown Square, Pennsylvania 19073
