From: Thomas O'Connell <tfo@monsterlabs.com>
To: pgsql-admin@postgresql.org
Subject: Re: performance tuning: shared_buffers, sort_mem; swap
Date: 2002-08-13 16:14:39
Message-ID: tfo-225E19.11143913082002@news.hub.org
Lists: pgsql-admin
In article <200208131556.g7DFuH008873@candle.pha.pa.us>,
pgman@candle.pha.pa.us (Bruce Momjian) wrote:
> Well, it doesn't really matter who is causing the swapping. If you have
> more of a load on your machine than RAM can hold, you are better off
> reducing your PostgreSQL shared buffers.
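For concreteness, the knobs in question live in postgresql.conf. A
minimal sketch of shrinking them, assuming PostgreSQL 7.2-era
conventions (shared_buffers counts 8KB pages, sort_mem is in KB; the
values and $PGDATA here are illustrative, not recommendations):

    # Check the current settings (commented-out lines mean defaults apply):
    grep -E 'shared_buffers|sort_mem' $PGDATA/postgresql.conf

    # To shrink the shared segment, edit postgresql.conf, e.g.:
    #   shared_buffers = 8192    # 8KB pages -> 64MB instead of 128MB
    #   sort_mem = 4096          # KB per sort operation
    # then restart so the new segment size takes effect:
    pg_ctl -D $PGDATA restart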
So the idea would be:
1. Start with the numbers above.
2. Benchmark Postgres on the machine with those numbers set (creating
enough load to require plenty of resource use in shared_buffers/sort_mem).
3. Monitor swap (see the sketch just after this list).
4. If heavy swapping occurs, reduce the amount of shared memory
allocated to shared_buffers/sort_mem.
Right?
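A minimal sketch of step 3, assuming a Linux box with vmstat and free
available (the interval is arbitrary):

    # The "si" and "so" columns show swap-in/swap-out activity per sample;
    # sustained nonzero values under load are the "heavy swapping" signal.
    vmstat 5

    # Point-in-time view of total swap in use:
    free -m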
On sort of a side note, here's the situation I've got:
I'm currently running Postgres on a couple of boxes with decent RAM and
processors. Each Postgres box, though, is also running several Apache
servers. The Apache servers run web applications that hit Postgres, so
when load on the box is high, it's caused by both Apache and Postgres.
We've had the issue before where Postgres dies under heavy load (meaning
Apache is logging several requests per minute and stressing Postgres,
too) with the error suggesting that we probably don't have shared memory
configured appropriately.
Is it possible to set the kernel resources and shared_buffers such that
Postgres won't be the point of failure when it tries to acquire more
shared memory than is currently available?
I guess the issue is: when kernel resources are maxed out, does
Postgres' architecture mean that when an IPC call fails, Postgres will
be the piece of the system to go down? E.g., if SHMALL/SHMMAX are
configured to allow 128MB of shared memory on a box with 512MB RAM, plus
a little extra to provide for Apache, and Postgres is set to have 128MB
of shared memory, is it a problem with our settings if Postgres crashes
when load is high? That is, could Apache be using up the extra
SHMALL/SHMMAX headroom, so that Postgres doesn't really have 128MB of
shared memory to work with?
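One way to see whether that's happening, sketched for Linux (the 128MB
figure is just the one from the example above):

    # Current kernel limits: max bytes per segment (SHMMAX) and
    # total shared memory pages system-wide (SHMALL):
    sysctl kernel.shmmax kernel.shmall

    # List live shared memory segments with owner and size, to see whether
    # Apache or anything else is eating into the SHMALL allowance:
    ipcs -m

    # Raise SHMMAX to 128MB (in bytes) until the next reboot:
    sysctl -w kernel.shmmax=134217728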
The trick, then, would seem to be to monitor swapping, but also to
monitor overall shared memory usage as it approaches the upper limits of
available resources.
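A hypothetical watchdog along those lines (the log path and interval are
assumptions), logging both so they can be correlated after a crash:

    while true; do
        date                 >> /var/log/shm-swap.log
        # second vmstat sample reflects current swap in/out rates:
        vmstat 1 2 | tail -1 >> /var/log/shm-swap.log
        # snapshot of live shared memory segments:
        ipcs -m              >> /var/log/shm-swap.log
        sleep 60
    done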
Sorry to ramble on. I'm just trying to get a high-performance database
running in a stable environment... :)
-tfo