From: Peter Geoghegan <peter(dot)geoghegan86(at)gmail(dot)com>
To: Karl Pickett <karl(dot)pickett(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Can Postgres Not Do This Safely ?!?
Date: 2010-10-29 07:53:03
Message-ID: AANLkTim-Pv3E0Q5Od-ue9P1p3R7Tboj1n1D1u5rkyAi4@mail.gmail.com
Lists: pgsql-general
On 29 October 2010 03:04, Karl Pickett <karl(dot)pickett(at)gmail(dot)com> wrote:
> Hello Postgres Hackers,
>
> We have a simple 'event log' table that is insert only (by multiple
> concurrent clients). It has an integer primary key. We want to do
> incremental queries of this table every 5 minutes or so, i.e. "select
> * from events where id > LAST_ID_I_GOT" to insert into a separate
> reporting database. The problem is, this simple approach has a race
> that will forever skip uncommitted events. I.e., if 5000 was
> committed sooner than 4999, and we get 5000, we will never go back and
> get 4999 when it finally commits. How can we solve this? Basically
> it's a phantom row problem but it spans transactions.
>
> I looked at checking the internal 'xmin' column but the docs say that
> is 32 bit, and something like 'txid_current_snapshot' returns a 64 bit
> value. I don't get it. All I want to do is make sure I skip over any
> rows that are newer than the oldest currently running transaction.
> Has nobody else run into this before?
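The reason the sizes don't match: the xmin in the row header is the raw 32-bit xid, while txid_current() and txid_current_snapshot() return 64-bit values extended with an epoch so they never wrap. So rather than comparing against xmin directly, you can stamp each row with txid_current() at insert time and keep a watermark. A rough, untested sketch (the insert_txid column and the watermark bookkeeping are my inventions, not anything built in):

    ALTER TABLE events
        ADD COLUMN insert_txid bigint NOT NULL DEFAULT txid_current();

    -- On each incremental pull:
    -- 1. Find the oldest txid that could still be in flight; every
    --    txid below it has either committed or aborted for good.
    SELECT txid_snapshot_xmin(txid_current_snapshot());  -- call it safe_xmax

    -- 2. Fetch rows whose inserting transaction is settled and that
    --    the previous pull's window did not already cover:
    SELECT *
      FROM events
     WHERE insert_txid >= :last_safe_xmax  -- watermark saved by the last pull
       AND insert_txid <  :safe_xmax;

    -- 3. Save safe_xmax as the watermark for the next pull.

Nothing gets skipped this way; the trade-off is that a single long-running write transaction holds the watermark back and delays everything that started after it.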
Alternatively, if I understand your question correctly, you want a "gapless" PK:
http://www.varlena.com/GeneralBits/130.php
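A minimal sketch of that idea (the table and column names here are illustrative, not taken from the article):

    CREATE TABLE event_id_counter (last_id bigint NOT NULL);
    INSERT INTO event_id_counter VALUES (0);

    -- Each inserter updates the single counter row and holds its row
    -- lock until commit, so ids are handed out and become visible
    -- strictly in order, with no committed gaps to skip over:
    BEGIN;
    UPDATE event_id_counter
       SET last_id = last_id + 1
    RETURNING last_id;                -- use the returned value as events.id
    INSERT INTO events (id, payload) VALUES (:new_id, :payload);
    COMMIT;

The cost is that the counter row serializes all of your inserters, so you're trading insert concurrency for commit-order visibility.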
--
Regards,
Peter Geoghegan