Re: Postgre Eating Up Too Much RAM

From: "Kevin Grittner" <kgrittn(at)mail(dot)com>
To: "Aaron Bono" <aaron(dot)bono(at)aranya(dot)com>,"Postgres" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Postgre Eating Up Too Much RAM
Date: 2012-11-14 10:49:02
Message-ID: 20121114104902.90160@gmx.com
Lists: pgsql-admin

Aaron Bono wrote:

> (there are currently a little over 200 active connections to the
> database):

How many cores do you have on the system? What sort of storage
system? What, exactly, are the symptoms of the problem? Are there
200 active connections when the problem occurs? By "active", do you
mean that there is a user connected or that they are actually running
something?

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
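
As a starting point, a query along these lines distinguishes
connections that are merely open from ones actually doing work
(this assumes PostgreSQL 9.2 or later, where pg_stat_activity has
a "state" column; on older releases you would look at whether
current_query is '<IDLE>' instead):

```sql
-- Break down the ~200 connections by what they are doing.
-- "active" means a query is running; "idle" means a client is
-- connected but waiting; "idle in transaction" is often a problem.
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;
```

If most of them turn out to be idle, that changes the diagnosis
considerably.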

> max_connections = 1000

If you want to handle a large number of clients concurrently, this is
probably the wrong way to go about it. You will probably get better
performance with a connection pool.

http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
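
For illustration only, a pooler such as pgbouncer can present a
large max_client_conn to the application while keeping the number
of real server connections small. The database name and sizes
below are made-up placeholders, not a recommendation for your
workload:

```ini
; minimal pgbouncer.ini sketch -- values are illustrative
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
default_pool_size = 20    ; actual connections held to PostgreSQL
max_client_conn = 1000    ; clients beyond the pool size queue briefly
```

Clients queueing for a few milliseconds is usually far cheaper
than the server juggling hundreds of simultaneous backends.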

> shared_buffers = 256MB

Depending on your workload, a Linux machine with 32GB RAM should
probably have this set somewhere between 1GB and 8GB.

> vacuum_cost_delay = 20ms

Making VACUUM less aggressive usually backfires and causes
unacceptable performance, although that might not happen for days or
weeks after you make the configuration change.

By the way, the software is called PostgreSQL. It is often shortened
to Postgres, but "Postgre" is just wrong.

-Kevin
