From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Richard Yen" <dba(at)richyen(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: postgres memory management issues?
Date: 2007-09-07 09:31:34
Message-ID: 874pi6zurt.fsf@oxford.xeocode.com
Lists: pgsql-performance
"Richard Yen" <dba(at)richyen(dot)com> writes:
> My understanding is that if any one postgres process's memory usage, plus the
> shared memory, exceeds the kernel limit of 4GB, then the kernel will kill the
> process off. Is this true? If so, would postgres have some prevention
> mechanism that would keep a particular process from getting too big? (Maybe
> I'm being too idealistic, or I just simply don't understand how postgres works
> under the hood)
I don't think you have an individual process going over 4G.

I think what you have is 600 processes that in aggregate are using more
memory than you have available. Do you really need 600 processes, by the way?
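A quick back-of-envelope sketch of the aggregate effect (the 32MB figure is
hypothetical, since the actual work_mem value wasn't quoted here; note also
that a single query can allocate several multiples of work_mem, one per
sort/hash node):

```shell
# Worst case if every backend runs one work_mem-sized sort at once.
backends=600
work_mem_mb=32                       # hypothetical value
total_mb=$(( backends * work_mem_mb ))
echo "${total_mb} MB"                # 19200 MB -- already past 16GB of RAM
```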
You could try lowering work_mem, but actually your value seems fairly
reasonable. Perhaps your kernel isn't actually able to use 16GB? What does
"cat /proc/meminfo" say? What does it say when this is happening?
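For reference, the interesting lines can be pulled out directly; comparing a
snapshot from a quiet period against one taken while the problem is occurring
is what matters:

```shell
# How much memory (and swap) the kernel actually sees and has free:
grep -E 'MemTotal|MemFree|SwapTotal|SwapFree|Committed_AS' /proc/meminfo
```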
You might also tweak /proc/sys/vm/overcommit_memory, but I don't remember
what the values are; you can search to find them.
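For what it's worth, per the kernel's overcommit-accounting documentation the
values are 0 = heuristic overcommit (the default), 1 = always overcommit, and
2 = strict accounting; mode 2 is the one often suggested for PostgreSQL
servers to keep the OOM killer away from large backends:

```shell
# Inspect the current overcommit policy (0, 1, or 2 -- see above):
cat /proc/sys/vm/overcommit_memory

# To switch to strict accounting (as root; persist via /etc/sysctl.conf):
# echo 2 > /proc/sys/vm/overcommit_memory
```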
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com