From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Alex Adriaanse" <alex(at)innovacomputing(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Out of memory
Date: 2008-03-28 20:31:01
Message-ID: dcc563d10803281331u2d31d35j3c944d2e98561c64@mail.gmail.com
Lists: pgsql-general
On Fri, Mar 28, 2008 at 12:38 PM, Alex Adriaanse
<alex(at)innovacomputing(dot)com> wrote:
> I have a client that experienced several Out Of Memory errors a few
> weeks ago (March 10 & 11), and I'd like to figure out the cause. In the
> logs it's showing that they were getting out of memory errors for about
> 0.5-1 hour, after which one of the processes would crash and take the
> whole database down. After they restarted the server it would
> eventually start giving out of memory messages and crash again. This
> happened a total of five times over a 24 hour period. After that we did
> not see these errors again. They did upgrade to 8.1.11 on the 14th, and
> have also moved some of the databases to different servers afterwards.
>
> First some background information:
>
> Software (at the time of the memory errors): CentOS 4.5 (x86_64) running
> its 2.6.9-55.ELsmp Linux kernel, PostgreSQL 8.1.9 (from RPMs provided on
> the PostgreSQL web site: postgresql-8.1.9-1PGDG.x86_64).
>
> Hardware: 4 dual-core Opterons. 16GB physical RAM, 2GB swap.
>
> Database: they use persistent connections, and usually have around 1000
> open database connections. The vast majority of those are usually
> idle. They do run a lot of queries though. The total size of the
> databases in this cluster is 36GB, with the largest database being 21GB,
> and the largest table being 2.5GB (having 20 million tuples).
>
> Highlights of postgresql.conf settings:
> max_connections = 2000
> shared_buffers = 120000
> work_mem = 4096
SNIP
Just because you can set max_connections to 2000 doesn't mean it's a
good idea. If your client needs 1000 persistent connections, then put
a connection pooler between your app (I'm guessing PHP since it
operates this way) and the database.
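As an example, a minimal PgBouncer setup could look something like this
(a sketch only; PgBouncer is just one choice of pooler, and the database
name, auth file path, and pool sizes below are placeholders you'd tune
for your own setup):

  ; pgbouncer.ini -- minimal sketch
  [databases]
  mydb = host=127.0.0.1 port=5432 dbname=mydb

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  pool_mode = transaction     ; reuse server connections between transactions
  max_client_conn = 1000      ; the app keeps its 1000 persistent connections
  default_pool_size = 50      ; but only ~50 real backends per db/user pair

Then you point the app at port 6432 instead of 5432, and the pooler hands
out a small set of real backends as needed.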
Running 1000 connections is a LOT, and if you really need 1000 active
connections, then you're likely gonna need a bigger machine than one
with 8 cores and 16 gigs of RAM. OTOH, if you are actively servicing
less than 10% of those connections at a time, then you're wasting
memory on backends that are started up and doing nothing. Each one
consumes some amount of memory on its own, usually in the 5 to 10 meg
range, just to sit there and do nothing.
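To put rough numbers on that (assuming ~5 to 10 meg per idle backend as
above, and remembering that in 8.1 shared_buffers is counted in 8 kB
buffers and work_mem in kB):

  shared_buffers:  120000 x 8 kB         = ~940 MB (shared, allocated once)
  idle backends:   1000 x ~5-10 MB       = ~5-10 GB just sitting there
  work_mem:        4096 kB = 4 MB per sort/hash operation, per backend;
                   a few hundred concurrent sorts = several more GB

On a box with 16 GB of RAM and only 2 GB of swap that doesn't leave a
lot of headroom.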
Plus you've got issues with thundering herd type situations that can
show up as you increase connections.
Pooling is the answer here.