Re: Big number of connections

From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Mike Sofen <msofen(at)runbox(dot)com>
Cc: Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, jarek <jarek(at)poczta(dot)srv(dot)pl>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Big number of connections
Date: 2016-04-04 13:33:32
Message-ID: CAFj8pRDqLJZ8M9NRKUywoQj72RUrNbsJ2nbCx=TutqxB_0rgOg@mail.gmail.com
Lists: pgsql-performance

Hi

2016-04-04 15:14 GMT+02:00 Mike Sofen <msofen(at)runbox(dot)com>:

> From: Jim Nasby Sent: Sunday, April 03, 2016 10:19 AM
>
> >>On 4/1/16 2:54 AM, jarek wrote:
> >> I'll be happy to hear form users of big PostgreSQL installations, how
> >> many users do you have and what kind of problems we may expect.
> >> Is there any risk, that huge number of roles will slowdown overall
> >> performance ?
>
> >Assuming you're on decent sized hardware though, 3000-4000 open
> connections shouldn't be much of an >issue *as long as very few are active
> at once*. If you get into a situation where there's a surge of activity
> >and you suddenly have 2x more active connections than cores, you won't be
> happy. I've seen that push >servers into a state where the only way to
> recover was to disconnect everyone.
> >--
> >Jim Nasby
>
> Jim - I don't quite understand the math here: on a server with 20 cores,
> it can only support 40 active users?
>
> I come from the SQL Server world where a single 20 core server could
> support hundreds/thousands of active users and/or many dozens of
> background/foreground data processes. Is there something fundamentally
> different between the two platforms relative to active user loads? How
> would we be able to use Postgres for larger web apps?
>

PostgreSQL doesn't have an integrated connection pooler - every connection to
Postgres is served by one dedicated PostgreSQL process. Performance benchmarks
typically show maximum throughput at roughly 10x the number of cores. With a
high number of connections you also have to use a small work_mem, which can
have a negative impact on performance too. Too many active PostgreSQL
processes increase the risk of performance problems with spinlocks, etc.

Web frameworks usually have their own pooling solution - so just use it. If
you need more logical connections than the optimum for your number of cores,
then you should use an external pooler like pgpool-II or pgbouncer.

http://www.pgpool.net/mediawiki/index.php/Main_Page
http://pgbouncer.github.io/

Pgbouncer is lightweight, with only the necessary functions; pgpool is a
little heavier, with a lot of functions.
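
As a sketch of the pgbouncer approach (database name, paths, ports, and pool
sizes below are illustrative assumptions, not recommendations):

```ini
; pgbouncer.ini - minimal illustrative sketch, tune for your workload
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets thousands of client connections
; share a small number of server connections
pool_mode = transaction
max_client_conn = 3000
default_pool_size = 40
```

With transaction pooling, a server connection is assigned only for the
duration of a transaction, so a few dozen backend processes can serve
thousands of mostly idle clients - which addresses the 2x-cores concern from
earlier in the thread.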

Regards

Pavel

>
> Mike Sofen
>
>
>
>
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>
