Re: Postgres memory question

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Kobus Wolvaardt <kobuswolf(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Postgres memory question
Date: 2009-08-09 16:55:28
Message-ID: dcc563d10908090955offb777el836605b070068074@mail.gmail.com
Lists: pgsql-general

On Sun, Aug 9, 2009 at 4:06 AM, Kobus Wolvaardt <kobuswolf(at)gmail(dot)com> wrote:
> Hi,
>
> We have software deployed on our network that needs Postgres, and a server
> that hosts the database. All worked fine until we crossed about 200 users.
> The application is written so that it makes a connection right at the start
> and keeps it alive for the duration of the app. The app is written in
> Delphi. The Postgres server runs on a Windows 2008 server with a quad-core
> CPU and 4 GB of RAM.

Is this an app you can fix yourself, or are you stuck with this design
misstep?

> The problem after ±200 connections is that the server runs out of memory,
> but most of these connections are idle... each one only gets used every 20
> minutes to capture a transaction.
>
> It looks like every idle connection uses about 10 MB of RAM, which seems
> high, but I cannot find a config option to limit it.
>
> I tried pgbouncer to do connection pooling, but for each connection to
> pgbouncer one connection is made to the server, which results in exactly the
> same number of connections. If I run it in transaction pooling mode it works
> for simple queries, but the programmer says something gets lost (views that
> were set up, or something).

Is each of these connections quite different from the others, or
something? I'm not that familiar with pgbouncer, so I don't know whether
that behaviour is normal. Can you get by with pgpool for this instead?
Does it work any better?
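
For what it's worth, pgbouncer's default session pooling mode keeps one
server connection assigned to each client for the life of the client's
session, so it can't reduce the backend count for an app that opens a
connection at startup and holds it. Transaction pooling does let many
clients share a small set of backends, but anything session-local (SET
parameters, temporary tables and views, prepared statements) is not
carried over from one transaction to the next, which may be what your
programmer is running into with the views. If that turns out to be
fixable on the app side, a transaction-pooling pgbouncer.ini along these
lines might be worth testing; the host, database name, and pool sizes
below are placeholders I'm making up, not anything from your setup:

; rough pgbouncer.ini sketch -- placeholder names throughout
[databases]
; clients connect to "appdb" on pgbouncer, which forwards to the real server
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = users.txt
; assign a server connection only for the duration of a transaction
pool_mode = transaction
; cap on server-side connections per database/user pair
default_pool_size = 20
; cap on client connections pgbouncer will accept
max_client_conn = 500

Since each client only runs a transaction every 20 minutes or so, a pool
of a couple of dozen backends should comfortably serve a few hundred
clients in that mode.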

> Any help or pointers would be nice, either on how to reduce memory usage or
> on how to get pooling to work.
>
> P.S. We are growing the user base by another 20% soon, and that will result
> in massive issues. I don't mind slower operation for now; I just need to
> keep it working.

If another pooling solution won't fix this, then you need more memory
and a bigger server. PostgreSQL on Windows is 32-bit, so you might have
problems running it well on a larger Windows machine; if that's the
case, it would likely help to run this on 64-bit Linux with 8+ GB of
RAM. That would let you grow by several hundred more connections before
you'd have issues. Performance might also be better on Linux with this
many connections, but I have no empirical evidence to support that
belief.
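
Before buying hardware it might also be worth confirming how much of the
load really is idle sessions. On the 8.x series, idle backends show up
in pg_stat_activity with '<IDLE>' as their current query (the column
names changed in later releases), so a rough query along these lines
should give you the split:

SELECT count(*) AS total_connections,
       sum(CASE WHEN current_query = '<IDLE>' THEN 1 ELSE 0 END) AS idle_connections
FROM pg_stat_activity;

Multiply the idle count by the ~10 MB you're seeing per backend and you
have a ballpark figure for how much RAM a pooler could win back.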
