From: Karl Pickett <karl(dot)pickett(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Can Postgres Not Do This Safely ?!?
Date: 2010-10-29 02:04:48
Message-ID: AANLkTi=rRkR-EgPkMLTVVukJV3X-eUCbk8fkva6X0HZY@mail.gmail.com
Lists: pgsql-general
Hello Postgres Hackers,
We have a simple 'event log' table that is insert-only (by multiple
concurrent clients). It has an integer primary key. We want to query
this table incrementally every 5 minutes or so, i.e. "select * from
events where id > LAST_ID_I_GOT", and insert the results into a
separate reporting database. The problem is that this simple approach
has a race that will forever skip uncommitted events: if id 5000 was
committed sooner than 4999 and we read 5000, we will never go back and
pick up 4999 when it finally commits. How can we solve this?
Basically it's a phantom-row problem, but one that spans transactions.
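To make the interleaving concrete, here is the sequence I mean (the
ids and timing are made up for illustration):

```sql
-- Session A: BEGIN; INSERT INTO events ...;  -- sequence hands out id 4999
-- Session B: BEGIN; INSERT INTO events ...;  -- sequence hands out id 5000
-- Session B: COMMIT;                         -- 5000 is now visible
-- Reporting job: SELECT * FROM events WHERE id > 4998;
--                -- sees only 5000, records LAST_ID_I_GOT = 5000
-- Session A: COMMIT;                         -- 4999 becomes visible at last,
--                                            -- but id <= 5000: skipped forever
```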
I looked at checking the internal 'xmin' column, but the docs say it
is 32-bit, while something like 'txid_current_snapshot' returns a
64-bit value. I don't get it. All I want is to make sure I skip over
any rows that are newer than the oldest currently running transaction.
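What I was hoping to write is something like the following --
completely untested, and I'm guessing at the xmin cast, the modulo to
strip the txid epoch, and the placeholder variables, so please correct
me if this is the wrong way to use these functions:

```sql
-- Capture the 64-bit txid of the oldest transaction still in progress.
SELECT txid_snapshot_xmin(txid_current_snapshot()) AS horizon;

-- Then only pull rows whose inserting transaction is already below that
-- horizon, so nothing still uncommitted can later appear behind us.
-- (xmin is a 32-bit xid; % 4294967296 strips the epoch from the 64-bit
--  txid.  Not wraparound-safe -- just the idea.)
SELECT *
  FROM events
 WHERE id > :last_id_i_got
   AND xmin::text::bigint < :horizon % 4294967296
 ORDER BY id;
```

Presumably the job would then only advance LAST_ID_I_GOT up to the
last id before any row the xmin filter excluded, so a filtered row
can't end up behind the cursor. Is that the right idea, or is there a
simpler way?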
Has nobody else run into this before?
Thank you very much.
--
Karl Pickett