Re: table as log (multiple writers and readers)

From: Joris Dobbelsteen <joris(at)familiedobbelsteen(dot)nl>
To: Andrew Sullivan <ajs(at)crankycanuck(dot)ca>, pgsql-general(at)postgresql(dot)org
Subject: Re: table as log (multiple writers and readers)
Date: 2008-04-21 21:49:39
Message-ID: 480D0BF3.8030104@familiedobbelsteen.nl
Lists: pgsql-general

Andrew Sullivan wrote:
> On Thu, Apr 17, 2008 at 12:44:51PM +0800, Craig Ringer wrote:
>> It won't work with multiple concurrent writers. There is no guarantee
>> that an INSERT with a timestamp older than the one you just saw isn't
>> waiting to commit.
>
> This is pretty unlikely -- I won't say impossible, because I'm sure there's
> some kernel-level race condition -- if you use the clock time approach and
> SERIALIZABLE mode.

Don't. Understand what SERIALIZABLE mode actually means (in this context):
* It restricts what you CAN and MAY read (i.e. it controls
visibility).

If you want to serialize your accesses (which is something completely
different) you are probably better off using an exclusive table lock
on the log table. Otherwise PostgreSQL cannot make such guarantees.
Even worse, you MUST NOT use SERIALIZABLE mode if you are going to check
your log table.
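A minimal sketch of the lock-based writer, assuming hypothetical
table and column names:

```sql
BEGIN;
-- EXCLUSIVE mode blocks other writers (and any lock-taking readers)
-- until COMMIT, so once our lock is released no INSERT with an older
-- timestamp can still be pending. Plain SELECTs are not blocked.
LOCK TABLE log_table IN EXCLUSIVE MODE;
INSERT INTO log_table (logged_at, message)
VALUES (now(), 'something happened');
COMMIT;
```

The lock is the synchronization point: timestamps assigned under it
cannot be interleaved out of order by concurrent writers.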

Similarly, handling referential integrity is complex, and PostgreSQL
has special mechanisms to handle it.

Going into detail about concurrently running code:
* Without synchronization mechanisms you CANNOT control in which order
code is executed.

Also, you are not dealing with a race condition here. It's a matter of
semantics, or requirements on the ordering. Race conditions are failures
caused by a particular ordering (in time) of events, in such a way that
corrupts the state. They can be prevented by proper locking, but are
easily overlooked, and the timing constraints are usually so stringent
that they are very hard to detect. Therefore they are also, usually,
very hard to encounter in common situations.

> You could add a trigger that checks for other timestamps
> < yours, I suppose. Of course, that's pretty heavyweight, too. How much is
> the absolute serialization worth to you in performance?

You cannot do this reliably: PostgreSQL's MVCC semantics do not allow
it. The only way is to take at least a write lock on the log table and
to ensure you are NOT in serializable mode.

Remember that you CANNOT view tuples that are not yet committed. In
fact, in serializable mode you will only see changes from transactions
that completed BEFORE you started yours.
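Given those visibility rules, a reader that wants a stable cut-off
point can (in READ COMMITTED mode, not SERIALIZABLE) take a lock that
conflicts with the writers' lock, so any in-flight INSERT has either
committed and become visible, or has not yet started. A sketch, again
with hypothetical names:

```sql
BEGIN;  -- READ COMMITTED, not SERIALIZABLE
-- SHARE mode conflicts with the writers' EXCLUSIVE lock: this waits
-- until pending inserts have committed, so their rows are visible
-- to the SELECT below, and no older row can appear later.
LOCK TABLE log_table IN SHARE MODE;
SELECT logged_at, message
FROM log_table
WHERE logged_at > '2008-04-21 00:00'
ORDER BY logged_at;
COMMIT;
```

In SERIALIZABLE mode this would not work: the snapshot is fixed at the
start of the transaction, before the lock is acquired.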

> The only other thing I can suggest is what someone else did: commit them
> with wallclock timestamps, and then have a different thread wake up every
> _n_ seconds and put the records into the proper table in timestamp order.

You can guarantee:
* the proper ordering of log events within a single transaction (a
property of transactions).
* the ordering of modifications/deletes to a single tuple (due to the
synchronization point created by the write lock).
This seems enough in the requested case.

You can even do so with a simple sequence counter if you are only
interested in the ordering of events in time.
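A sketch of the sequence-counter variant (hypothetical names; note
that sequence values are assigned at INSERT time, not at commit, so
the table lock above is still needed if readers must never miss a row):

```sql
CREATE TABLE log_table (
    event_id  bigserial PRIMARY KEY,          -- assigned monotonically at insert
    logged_at timestamptz NOT NULL DEFAULT now(),
    message   text NOT NULL
);

-- Readers remember the largest event_id they have processed and poll:
-- SELECT * FROM log_table WHERE event_id > :last_seen ORDER BY event_id;
```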

- Joris
