From: "Kevin Grittner" <kgrittn(at)mail(dot)com>
To: "Catalin Iacob" <iacobcatalin(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: How to keep queries low latency as concurrency increases
Date: 2012-10-30 11:55:54
Message-ID: 20121030115554.306900@gmx.com
Lists: pgsql-performance
Catalin Iacob wrote:
> Hardware:
> Virtual machine running on top of VMWare
> 4 cores, Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
> 4GB of RAM
You should carefully test transaction-based pools limited to around 8
DB connections. Experiment with different size limits.
http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
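A minimal pgbouncer.ini sketch of that setup (the database name, socket
directory, and sizes are placeholders; tune default_pool_size against your
own benchmarks):

    [databases]
    ; hypothetical entry; adjust dbname and socket directory to your setup
    mydb = host=/var/run/postgresql dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    ; release the server connection at the end of each transaction
    pool_mode = transaction
    ; start around 8 and measure up and down from there
    default_pool_size = 8
    max_client_conn = 200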
> Disk that is virtual enough that I have no idea what it is, I know
> that there's some big storage shared between multiple virtual
> machines. Filesystem is ext4 with default mount options.
Can you change to noatime?
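For instance, as an /etc/fstab entry (the UUID and mount point below are
placeholders for your actual filesystem), or applied live with a remount:

    # /etc/fstab -- hypothetical line; substitute your device/UUID and mount point
    UUID=xxxx-xxxx  /  ext4  noatime,errors=remount-ro  0  1

    # apply without a reboot:
    mount -o remount,noatime /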
> pgbouncer 1.4.2 installed from Ubuntu's packages on the same
> machine as Postgres. Django connects via TCP/IP to pgbouncer (it
> does one connection and one transaction per request) and pgbouncer
> keeps connections open to Postgres via Unix socket. The Python
> client is self compiled psycopg2-2.4.5.
Is there a good transaction-based connection pooler in Python? You're
better off with a good pool built into the client application than
with a good pool running as a separate process between the client and
the database, IMO.
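As a minimal sketch of what an in-client, transaction-scoped pool could
look like using the pool module bundled with psycopg2 (the DSN, the sizes,
and the per-request wiring are assumptions, not something Django gives you
out of the box):

    # Sketch: borrow a connection only for the life of one transaction,
    # using psycopg2's bundled pool. DSN and sizes are placeholders.
    from contextlib import contextmanager
    from psycopg2.pool import ThreadedConnectionPool

    pool = ThreadedConnectionPool(2, 8, "dbname=mydb user=myuser")

    @contextmanager
    def transaction():
        conn = pool.getconn()      # check out one connection per transaction
        try:
            yield conn
            conn.commit()
        except Exception:
            conn.rollback()
            raise
        finally:
            pool.putconn(conn)     # return it the moment the transaction ends

    # per request:
    with transaction() as conn:
        cur = conn.cursor()
        cur.execute("SELECT 1")
        print(cur.fetchone())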
> random_page_cost | 2
For fully cached databases I recommend random_page_cost = 1, and I
always recommend cpu_tuple_cost = 0.03.
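In postgresql.conf terms:

    # postgresql.conf -- planner costs for a database that fits in cache
    random_page_cost = 1.0    # random reads are no pricier than sequential ones
    cpu_tuple_cost = 0.03     # the 0.01 default undervalues per-row CPU work

Both settings take effect on a reload (pg_ctl reload); no restart is needed,
and you can also try them per-session with SET before committing to them.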
-Kevin