Re: xmin and very high number of concurrent transactions

From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: Vijaykumar Jain <vjain(at)opentable(dot)com>, pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: xmin and very high number of concurrent transactions
Date: 2019-03-13 08:49:40
Message-ID: a7a3fc9d64b2884e2108e39189ec1aa943097b1b.camel@cybertec.at
Lists: pgsql-general

Vijaykumar Jain wrote:
> I was asked this question in one of my demos, and it was an interesting one.
>
> We set xmin for new inserts to the current txid.
> Now, in a highly concurrent scenario where more than 2000 concurrent
> users are trying to insert new data,
> will assigning the xmin value become a bottleneck?
>
> I know we should use pooling solutions to reduce the number of concurrent
> connections, but assume we have enough resources to take care of
> spawning a new process for each new connection.

You can read the function GetNewTransactionId in
src/backend/access/transam/varsup.c for details.

Transaction ID creation is serialized with a lightweight lock (LWLock),
so it could potentially become a bottleneck.
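
To make that concrete, here is a minimal, standalone C analogue (not the
actual PostgreSQL source): several threads pull IDs from one shared counter
behind a single lock, which is roughly the pattern GetNewTransactionId uses
under the XidGenLock lightweight lock. The thread count, the starting value
and the use of a pthread mutex instead of an LWLock are stand-ins for
illustration only.

/* Standalone analogue (NOT PostgreSQL source code): many threads asking for
 * a new "transaction ID" from one shared counter protected by a single lock.
 * Build with: cc -pthread xid_demo.c
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NTHREADS        8
#define XIDS_PER_THREAD 100000

static pthread_mutex_t xid_gen_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t next_xid = 3;   /* PostgreSQL starts normal XIDs at 3 */

/* Simplified stand-in for GetNewTransactionId(): the lock makes the
 * read-and-advance of the shared counter atomic, so every caller gets a
 * distinct ID, but callers have to queue up behind one another. */
static uint64_t
get_new_transaction_id(void)
{
    pthread_mutex_lock(&xid_gen_lock);
    uint64_t xid = next_xid++;
    /* The real function also extends the commit log and related
     * structures here, still while holding the lock. */
    pthread_mutex_unlock(&xid_gen_lock);
    return xid;
}

static void *
worker(void *arg)
{
    (void) arg;
    for (int i = 0; i < XIDS_PER_THREAD; i++)
        get_new_transaction_id();
    return NULL;
}

int
main(void)
{
    pthread_t threads[NTHREADS];

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);

    printf("handed out %llu transaction IDs\n",
           (unsigned long long) (next_xid - 3));
    return 0;
}

Every caller gets a distinct ID, but they serialize on the lock; that queueing
is the potential bottleneck referred to above.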

Often that is dwarfed by the I/O required by many concurrent
commits, but if most of your transactions are rolled back, or if you
run with "synchronous_commit = off", I can imagine that it could matter.

It is not a matter of how many clients there are, but of how
often a new writing transaction is started.
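
You can observe that from the client side: PostgreSQL assigns a transaction
ID lazily, only when a transaction first needs to write. Here is a small
libpq sketch (the connection string and the temporary table are placeholders
of my choosing) that uses txid_current_if_assigned() to show that no ID
exists until the first write.

/* Client-side sketch; build with:
 *   cc xid_lazy.c -I$(pg_config --includedir) -L$(pg_config --libdir) -lpq
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

/* Run a statement and abort on error. */
static void
run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK &&
        PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
        PQclear(res);
        exit(1);
    }
    PQclear(res);
}

/* Print the current transaction's ID, if one has been assigned yet. */
static void
show_xid(PGconn *conn, const char *label)
{
    PGresult *res = PQexec(conn, "SELECT txid_current_if_assigned()");

    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
        exit(1);
    }
    if (PQgetisnull(res, 0, 0))
        printf("%s: no transaction ID assigned\n", label);
    else
        printf("%s: transaction ID %s\n", label, PQgetvalue(res, 0, 0));
    PQclear(res);
}

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres");  /* placeholder DSN */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    run(conn, "BEGIN");
    show_xid(conn, "after BEGIN, reads only");  /* prints: no ID assigned */

    run(conn, "CREATE TEMP TABLE demo(i int)");
    run(conn, "INSERT INTO demo VALUES (1)");
    show_xid(conn, "after the first write");    /* now an XID exists */

    run(conn, "ROLLBACK");
    PQfinish(conn);
    return 0;
}

A transaction that only reads never reaches GetNewTransactionId at all, so a
workload of many read-only clients does not contend on that lock.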

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
