Handling small inserts from many connections.

From: 우성민 <dntjdals0513(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Handling small inserts from many connections.
Date: 2017-09-04 08:14:39
Message-ID: CABdtbz0LZOEXYAh6f=PpLvF57jk2B4UOH2y5ETtHpv7ibu_17A@mail.gmail.com
Lists: pgsql-performance

Hi team,

I'm trying to configure postgres and pgbouncer to handle many inserts from
many connections.
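(For reference, since the actual pgbouncer.ini is only attached below and not shown here: a minimal PgBouncer setup for funneling thousands of client connections into a small pool of server connections might look like the following sketch. The database name, addresses, and pool sizes are illustrative assumptions, not the poster's configuration.)

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling lets 3000+ clients share far fewer server connections
    pool_mode = transaction
    max_client_conn = 4000
    default_pool_size = 20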

Here are some details about what I want to achieve:

We have more than 3,000 client connections, and my server program forks a
backend process for each client connection.
When a backend process sends a request to its connected client, the client
sends some text data (about 3,000 bytes) back to the backend process and
waits for the next request.
The backend process inserts the text data using PQexec from the libpq
library; once PQexec completes, the backend process sends the next request
to the client.

All the inserts go into one and the same table.
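
To make the loop concrete, here is a minimal sketch of such a backend process in C against libpq. The table name client_data, the payload column, and the connection string are illustrative assumptions (they are not from the post or its attachments), and stdin stands in for the client socket; PQexecParams is used in place of the post's PQexec so the text data needs no manual escaping.

    /* A minimal sketch of the per-client backend loop described above.
     * Assumptions (not from the original post): a table created as
     *   CREATE TABLE client_data (payload text);
     * and a connection through PgBouncer on localhost:6432. Here stdin
     * stands in for the client socket. */
    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("host=127.0.0.1 port=6432 dbname=mydb user=myuser");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        char buf[4096];                     /* ~3,000-byte payloads fit here */
        while (fgets(buf, sizeof buf, stdin) != NULL) {
            buf[strcspn(buf, "\n")] = '\0'; /* strip the trailing newline */

            /* PQexecParams is used instead of plain PQexec so the text
             * payload does not need manual quoting/escaping. */
            const char *params[1] = { buf };
            PGresult *res = PQexecParams(conn,
                    "INSERT INTO client_data (payload) VALUES ($1)",
                    1,      /* number of parameters */
                    NULL,   /* let the server infer parameter types */
                    params,
                    NULL,   /* text parameters need no lengths */
                    NULL,   /* all parameters in text format */
                    0);     /* ask for text-format results */
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
            PQclear(res);

            /* ...here the real backend would send the next request
             * to its connected client... */
        }

        PQfinish(conn);
        return 0;
    }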

The problem is that clients wait too long because the inserts are too slow.
Everything seems to work fine at first, but it slows down after a couple of
hours; each insert query takes 3,000+ ms, and the time keeps growing.

I need some help figuring out the actual cause of this problem.

System information:
PGBouncer 1.7.2.
PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-18), 64-bit on CentOS release 6.9 (Final).
Kernel version 2.6.32-696.10.1.el6.x86_64
Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz processor.
32GB ECC/REG-Buffered RAM.
128GB Samsung 840 evo SSD.

Attachment Content-Type Size
pgbouncer.ini.txt text/plain 82 bytes
postgresql.conf.txt text/plain 1.0 KB
