From: | "David Wilson" <david(dot)t(dot)wilson(at)gmail(dot)com> |
---|---|
To: | "Vance Maverick" <vmaverick(at)pgp(dot)com> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: table as log (multiple writers and readers) |
Date: | 2008-04-16 19:27:24 |
Message-ID: | e7f9235d0804161227o65d1b3eap23e70dda6854415c@mail.gmail.com |
Lists: | pgsql-general |
(I originally missed replying to all here; sorry about the duplicate,
Vance, but I figured others might be interested.)
On Wed, Apr 16, 2008 at 1:55 PM, Vance Maverick <vmaverick(at)pgp(dot)com> wrote:
>
> Another approach would be to queue the log entries in a "staging" table,
> so that a single process could move them into the log. This is fairly
> heavyweight, but it would guarantee the consistent sequencing of the log
> as seen by a reader (even if the order of entries in the log didn't
> always reflect the true commit sequence in the staging table). I'm
> hoping someone knows a cleverer trick.
Consider a loop like the following:

    advisory lock staging table
    if (entries in staging table)
        copy entries to main log table as a single transaction
    release advisory lock on staging table
    read out and handle most recent log entries from main table
The advisory lock is automatically released on client disconnect, and
doing the whole thing within one transaction should prevent any
partial copies on failure.
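To make that concrete, here's a rough sketch of what the copy step
could look like in plain SQL. The table names (log_staging, log_main),
their columns, and the advisory lock key 1 are just placeholders I've
made up; the temp table is there so you delete exactly the rows you
copied, even if a producer commits a new row between the two
statements:

    BEGIN;
    -- serialize competing log movers; key 1 is an arbitrary placeholder
    SELECT pg_advisory_lock(1);

    -- snapshot the staged rows so the copy and the delete see the same set
    CREATE TEMP TABLE batch ON COMMIT DROP AS
        SELECT id, payload FROM log_staging;

    -- copy the batch into the main log, preserving staging order
    INSERT INTO log_main (payload)
        SELECT payload FROM batch ORDER BY id;

    -- remove only what was copied; later inserts stay for the next pass
    DELETE FROM log_staging WHERE id IN (SELECT id FROM batch);

    SELECT pg_advisory_unlock(1);
    COMMIT;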
It doesn't matter that there are concurrent inserts into the staging
table, because the staged entries are always copied to the main table
and wiped as a single batch while the advisory lock is held. You also
can't lose data, because every entry is always in one of the two tables.
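The "read out" step is then just an ordinary ordered scan of the main
table. Assuming the same made-up schema, with each reader remembering
the highest id it has already handled, something like:

    SELECT id, payload
    FROM log_main
    WHERE id > :last_seen_id   -- the reader's bookmark from its previous pass
    ORDER BY id;

Because only one mover inserts into log_main at a time, ids should
become visible in order, so a reader that tracks the highest id it has
seen shouldn't skip entries.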
--
- David T. Wilson
david(dot)t(dot)wilson(at)gmail(dot)com