From: Andrew Sullivan <andrew(at)libertyrms(dot)info>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Transaction Exception Question
Date: 2002-08-14 18:12:05
Message-ID: 20020814141205.R15973@mail.libertyrms.com
Lists: pgsql-general
On Wed, Aug 14, 2002 at 08:50:32AM -0700, Jon Swinth wrote:
>
> In the example I gave, the record is already there, but the second client
> cannot see it yet (not committed), so it attempts an insert too. If the first
> client is successful and commits, then the second client will get an SQL error
> on insert for a duplicate key. Currently, Postgres requires that the
> second client roll back everything in the transaction, when it would be a
> simple matter to catch the duplicate key error, select the record back, and
> update it.
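Right. For the record, the pattern you're describing would look roughly
like this (a sketch only, with a made-up orders table; today the failed
INSERT aborts the whole transaction, so the recovery steps are impossible):

    BEGIN;
    -- ... other work in the same transaction ...
    INSERT INTO orders (order_no, qty) VALUES ('A100', 5);
    -- duplicate key error here: what you'd like to do, instead of
    -- rolling back everything, is recover the conflicting row
    SELECT qty FROM orders WHERE order_no = 'A100' FOR UPDATE;
    UPDATE orders SET qty = qty + 5 WHERE order_no = 'A100';
    COMMIT;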
Could you cache the locally-submitted statements from earlier in the
transaction, and then resubmit them as part of a new transaction? I
know that's not terribly efficient, but if you _really_ need
transactions running that long, it may be the only way until
savepoints are added.
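Roughly like this (a sketch; the statement cache has to live in your
application, since the server forgets the aborted transaction entirely):

    BEGIN;
    INSERT INTO orders (order_no, qty) VALUES ('A100', 5);
    -- duplicate key error: the whole transaction is now aborted
    ROLLBACK;
    BEGIN;
    -- replay every statement the application cached from the failed
    -- transaction, then handle the conflicting row as an update
    UPDATE orders SET qty = qty + 5 WHERE order_no = 'A100';
    COMMIT;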
I wonder, however, if this isn't one of those cases where proper
theory-approved normalisation is the wrong way to go. Maybe you need
an order-submission queue table to keep contention low on the
(products? I think that was your example) table.
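Something like this, say (a sketch, all names invented):

    CREATE TABLE order_queue (
        id         serial PRIMARY KEY,
        product_id integer NOT NULL,
        qty        integer NOT NULL,
        submitted  timestamp DEFAULT now()
    );

    -- clients only ever INSERT fresh rows, so two clients can never
    -- collide on a duplicate key; one periodic job drains the queue
    -- and applies the accumulated changes to the products table
    INSERT INTO order_queue (product_id, qty) VALUES (42, 5);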
A
--
----
Andrew Sullivan                         87 Mowat Avenue
Liberty RMS                             Toronto, Ontario Canada
<andrew(at)libertyrms(dot)info>         M6K 3E3
                                        +1 416 646 3304 x110