From: Douglas McNaught <doug(at)mcnaught(dot)org>
To: snacktime <snacktime(at)gmail(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: implementing a read lock
Date: 2006-08-26 20:22:57
Message-ID: 87zmdr9rha.fsf@suzuka.mcnaught.org
Lists: pgsql-general
snacktime <snacktime(at)gmail(dot)com> writes:
> I have an application that processes credit card transactions, and
> contains a table called authorizations. The authorizations table
> contains information returned by the bank necessary to capture the
> transaction. Nothing should block the application from inserting new
> rows into the authorizations table. When the authorizations are
> captured, one or more rows will be fetched, captured, and if
> successful the rows will be deleted. No updates are done on the
> table, only inserts or deletes. Naturally I want to prevent
> different instances of the same application from trying to select the
> same rows to capture, resulting in duplicate charges. I can't lock
> the whole table because new authorizations are constantly coming in.
> Is creating a separate table that I use just as a lock table the best
> approach?
I'm not quite sure why SELECT FOR UPDATE wouldn't work for you. The
capturing process would SELECT candidate transactions (where the
status is 'NEW', say) FOR UPDATE, mark their status as 'PROCESSING',
then COMMIT and try to process each transaction. Once the outcome of
each transaction is known, it can mark the row as 'DONE' or 'FAILED'.
SELECT FOR UPDATE doesn't block inserts of new rows.
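Roughly, with a status column the flow might look like this (the
column names auth_id/status and the claimed_by tag are just
illustrative, not your actual schema):

  -- claim a batch of new authorizations; the FOR UPDATE locks make a
  -- concurrent capture run wait instead of grabbing the same rows
  BEGIN;
  UPDATE authorizations
     SET status = 'PROCESSING', claimed_by = 'worker-1'
   WHERE auth_id IN (SELECT auth_id
                       FROM authorizations
                      WHERE status = 'NEW'
                        FOR UPDATE);
  COMMIT;

  -- fetch the rows this worker just claimed and capture each one
  SELECT auth_id
    FROM authorizations
   WHERE status = 'PROCESSING'
     AND claimed_by = 'worker-1';

  -- record the outcome for each row once the bank responds
  UPDATE authorizations
     SET status = 'DONE'   -- or 'FAILED'
   WHERE auth_id = 42;

The claimed_by tag is only there so a worker can find its own batch
after the COMMIT; with a single capture process you wouldn't need it.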
-Doug