Re: Handling small inserts from many connections.

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: 우성민 <dntjdals0513(at)gmail(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Handling small inserts from many connections.
Date: 2017-09-04 22:27:06
Message-ID: CAMkU=1yjYi93b9BnAn_mD8tJnnbRZmqK6jn0RzYQRu-pW_x=8Q@mail.gmail.com
Lists: pgsql-performance

On Mon, Sep 4, 2017 at 1:14 AM, 우성민 <dntjdals0513(at)gmail(dot)com> wrote:

> Hi team,
>
> I'm trying to configure postgres and pgbouncer to handle many inserts from
> many connections.
>
> Here are some details about what I want to achieve:
>
> We have more than 3000 client connections, and my server program forks a
> backend process for each client connection.
> When a backend process sends a request to its connected client, the client
> sends some text data (about 3000 bytes) to the backend process and waits
> for the next request.
> The backend process inserts the text data using PQexec from the libpq
> library; once PQexec completes, the backend process sends a request to the
> client again.
>
> All the inserts go into one and the same table.
>
> The problem is that clients wait too long because the inserts are too slow.
> It seems to work fine at first, but slows down after a couple of hours;
> each insert query takes 3000+ ms and keeps growing.
>
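Since the question mentions configuring pgbouncer for thousands of clients, here is a minimal pgbouncer.ini sketch for transaction pooling. The database name, addresses, and pool sizes are illustrative assumptions, not values from the original post; transaction pooling lets many idle clients share a much smaller set of server backends.

```ini
; Hypothetical pgbouncer.ini sketch -- names and sizes are illustrative.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; transaction pooling: a server connection is assigned only for the
; duration of a transaction, so ~3000 clients can share far fewer backends
pool_mode = transaction
max_client_conn = 4000
default_pool_size = 50
```

With this setup, clients connect to port 6432 instead of 5432; note that transaction pooling is incompatible with session-level features such as session-scoped prepared statements or advisory locks.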

If it takes a couple hours for it to slow down, then it sounds like you
have a leak somewhere in your code.

Run "top" and see what is using the CPU time (or the IO wait time and
memory, if that is where the bottleneck is).
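For a quick non-interactive snapshot, something like the following works on Linux with procps (the exact flags are an assumption about the platform; "top" in interactive mode shows the same information):

```shell
# Snapshot of processes sorted by CPU usage, highest first.
# Look for postgres backends or your own forked processes at the top.
ps aux --sort=-%cpu | head -n 15

# Same, sorted by resident memory, to spot a leaking process growing over time.
ps aux --sort=-rss | head -n 15
```

Re-running the memory snapshot every hour and comparing the RSS column for the same PIDs is a simple way to confirm a leak.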

Cheers,

Jeff
