Re: how to serialize insert followed by read(select) by different clients

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Sandeep Gupta <gupta(dot)sandeep(at)gmail(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: how to serialize insert followed by read(select) by different clients
Date: 2016-08-07 15:43:18
Message-ID: 21823.1470584598@sss.pgh.pa.us
Lists: pgsql-general

Sandeep Gupta <gupta(dot)sandeep(at)gmail(dot)com> writes:
> The first program, let's say the writer (using psql), appends to a table
> in the database.
> The second program, the reader (Python, using SQLAlchemy), reads the data.
> This happens in a loop, one for each day. The programs are fired in that
> order and the first program always commits after it inserts new rows.

> The problem is that the second program does not see the updates of the first
> program consistently.

There are only two possible explanations for that:

1. The writer isn't actually issuing a COMMIT when you think it is.

2. The reader is using a stale snapshot, ie it's using SERIALIZABLE
or REPEATABLE READ transaction mode and its transaction started before
the writer committed.
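
For case 2, here is a minimal psql sketch of how a REPEATABLE READ
snapshot hides a commit made after the reader's snapshot is taken (the
table "daily_data" is just an example name, not from your setup):

    -- session A: the reader
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM daily_data;   -- snapshot is taken at this first query

    -- session B: the writer, meanwhile
    BEGIN;
    INSERT INTO daily_data (val) VALUES (1);
    COMMIT;

    -- back in session A
    SELECT count(*) FROM daily_data;   -- still the old count: same snapshot
    COMMIT;
    SELECT count(*) FROM daily_data;   -- the new row is visible now

In READ COMMITTED mode (the default) each statement takes a fresh
snapshot, so the second SELECT in session A would already see the row.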

If you're having trouble identifying the cause of the problem you
might try setting "log_statement = all" and looking at where BEGINs
and COMMITs get issued.
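
If you'd rather not edit postgresql.conf by hand, one way to turn that
on (assuming superuser access and PostgreSQL 9.4 or later for ALTER
SYSTEM):

    ALTER SYSTEM SET log_statement = 'all';
    SELECT pg_reload_conf();
    -- every BEGIN, INSERT, COMMIT and SELECT now appears in the server log
    -- with a timestamp, so you can see the actual ordering of the two clients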

(Well, I guess that only exhausts the possibilities as long as this is
happening on a single database server. If the reader is reading from
a hot-standby slave then replication delays might explain your problem.
But that would be a rather material omission of facts.)
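
If a standby is in the picture, a quick sanity check is to run this on
the standby itself (a rough estimate only; it reports how far behind
WAL replay is):

    -- on the hot-standby server
    SELECT now() - pg_last_xact_replay_timestamp() AS approx_replication_delay;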

regards, tom lane
