Re: large dataset with write vs read clients

From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Dan Harris <fbsd(at)drivefaster(dot)net>
Cc: pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: large dataset with write vs read clients
Date: 2010-10-07 18:57:48
Message-ID: 20101007185748.GZ26232@tamriel.snowman.net

* Dan Harris (fbsd(at)drivefaster(dot)net) wrote:
> On 10/7/10 11:47 AM, Aaron Turner wrote:
>> Basically, each connection is taking about 100MB resident. As we need
>> to increase the number of threads to be able to query all the devices
>> in the 5 minute window, we're running out of memory.
> I think the first thing to do is look into using a connection pooler
> like pgpool to reduce your connection memory overhead.

Yeah. Keeping the number of database connections close to the number of
processors is the usual recommendation.
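
To make that concrete, here's a minimal application-side sketch of the same
idea using psycopg2's ThreadedConnectionPool rather than pgpool itself. The
DSN, the placeholder query, and the 500-device count are illustrative
assumptions, not anything from your setup; the point is the shape: many
devices served by a worker/connection count sized to the CPUs.

    import os
    from concurrent.futures import ThreadPoolExecutor
    from psycopg2.pool import ThreadedConnectionPool

    # Illustrative DSN -- not from this thread.
    DSN = "dbname=metrics user=collector host=localhost"

    # Keep worker threads and backend connections near the processor
    # count; hundreds of devices share a small, fixed set of connections
    # instead of each holding a ~100MB backend of its own.
    WORKERS = os.cpu_count() or 4
    pool = ThreadedConnectionPool(minconn=1, maxconn=WORKERS, dsn=DSN)

    def poll_device(device_id):
        # Hypothetical per-device query; each worker reuses a pooled connection.
        conn = pool.getconn()
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT now()")  # stand-in for the real per-device query
                cur.fetchone()
            conn.commit()  # end the transaction so the connection goes back clean
        finally:
            pool.putconn(conn)

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=WORKERS) as ex:
            # Consume the iterator so any per-device exception surfaces here.
            list(ex.map(poll_device, range(500)))
        pool.closeall()

A middleware pooler like pgpool or pgbouncer gets you much the same effect
without touching the client code, which is often the easier route if the
application can't be restructured.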

Stephen
