Re: Streaming Replica Master-Slave Config.

From: John R Pierce <pierce(at)hogranch(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Streaming Replica Master-Slave Config.
Date: 2016-08-05 19:43:43
Message-ID: cd0d8c85-334a-2120-b1a8-2556583fe82b@hogranch.com
Lists: pgsql-general

On 8/4/2016 9:15 AM, Eduardo Morras wrote:
> If you set max_connections too high, those connections will compete/fight for the same resources (CPU, disk I/O, memory and caches, locks), and postgres will spend more time managing resources than doing real work. Believe me (or us): set it as we say and use a bouncer like pgbouncer. It can run on the same server.
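(for reference, a minimal pgbouncer sketch along the lines Eduardo suggests; the listen port, pool sizes, and file paths below are illustrative assumptions, not values from this thread:)

```
; pgbouncer.ini (illustrative values, not from this thread)
[databases]
; pgbouncer pools are created per database/user pair
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction      ; server connection is returned at commit/rollback
default_pool_size = 8        ; roughly a few backends per cpu core
max_client_conn = 1000       ; client connections can far exceed server backends
```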

idle connections only use a small amount of memory, a process, a socket,
and some file handles. when you have multiple databases, it's
impossible to share a single connection pool across them.

the OP is talking about having 350 'tenants' each with their own
database and user on a single server.
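(to make that concrete: in pgbouncer each database/user pair gets its own pool, so the OP's setup means one small pool per tenant rather than one shared pool. a sketch, with hypothetical tenant names and sizes:)

```
[databases]
; one pool per tenant database/user pair; with 350 tenants
; that is 350 small pools, not one shared pool
tenant001 = host=127.0.0.1 dbname=tenant001 pool_size=4
tenant002 = host=127.0.0.1 dbname=tenant002 pool_size=4
; or a wildcard fallback instead of listing every tenant:
* = host=127.0.0.1 pool_size=4
```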

your 1 connection per core suggestion is ludicrous for this
scenario. in many database applications, most connections are idle
most of the time. sure, you don't want much more than about 2-4X your
cpu thread count actively running queries at the same time if you want
the maximum aggregate transactions/second, but you can still get
acceptable performance at connection counts several times higher than
that, depending on the workload. in my benchmarks the aggregate TPS
rolls off fairly slowly for quite a ways past the 2-4 connections per
hardware thread or core level, at least doing simple OLTP on a
high-concurrency storage system (lots of fast disks in raid10).
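(the kind of sweep behind numbers like these can be sketched with pgbench; the database name "bench", the client counts, and the thread/duration flags are assumptions, and the commands are only echoed here so nothing runs against a live server:)

```shell
#!/bin/sh
# sweep pgbench client counts to see where aggregate TPS rolls off.
# assumes a pgbench-initialized database named "bench" (hypothetical).
sweep() {
  for clients in 4 8 16 32 64 128; do
    # print each command instead of running it; drop "echo" to execute
    echo pgbench -c "$clients" -j 4 -T 60 bench
  done
}
sweep
```

(remove the `echo` and compare the reported tps across runs; the knee typically sits a few multiples above the hardware thread count.)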

--
john r pierce, recycling bits in santa cruz
