Re: Transaction Exception Question

From: Jon Swinth <jswinth(at)atomicpc(dot)com>
To: Andrew Sullivan <andrew(at)libertyrms(dot)info>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Transaction Exception Question
Date: 2002-08-14 18:40:50
Message-ID: 200208141840.g7EIeoM06636@localhost.localdomain
Lists: pgsql-general

Thanks Andrew for your reply.

You confused me at first; I take it the second paragraph was about my issue
with FK triggers taking write locks on parent tables.

You're right that I may need a workaround for transactions being forced to
roll back on exception. Savepoints may indeed be the answer I am looking for,
although I would like to see them implemented internally in the DB so that
the transaction automatically goes back to the point just before the
exception. I have already accepted that the DB will be this way for
a while. The purpose of the original e-mail was to find out whether things
are this way for technical reasons (which would mean this could be added to
the todo list) or for idealistic reasons.

As for the FK issue: an order queue isn't feasible because of a current
requirement that the customer receive immediate feedback if the credit card
is declined, and I can't contact the credit card company without a concrete
order number (keeping in mind that some customers will hit back on their
browser and try to submit again). I could eliminate a lot of contention if
I could do the credit card authorization later and just cancel the order.

As for de-normalizing the DB: product is only one of the FK fields in
contention. There are also order status, carrier, carrier service, inv type,
inv status, and others. If I have to disable all the FKs to make things
work, why did I insist on a DB with foreign keys in the first place?

I am raising these issues because I think PostgreSQL can be a serious
contender for high-volume applications. I just don't want to have to trade
good DB and application design for speed.

On Wednesday 14 August 2002 11:12 am, Andrew Sullivan wrote:
> On Wed, Aug 14, 2002 at 08:50:32AM -0700, Jon Swinth wrote:
> > In the example I gave, the record is already there but the second client
> > cannot see it yet (not commited) so it attempts an insert too. If the
> > first client is successful and commits then the second client will get an
> SQL error on insert for duplicate key. In PostgreSQL this currently
> requires that the second client roll back everything in the transaction,
> when it would be a simple matter to catch the duplicate key error, select
> back the record, and update it.
>
> Could you cache the locally-submitted bits from previously in the
> transaction, and then resubmit them as part of a new transaction? I
> know that's not terribly efficient, but if you _really_ need
> transactions running that long, it may be the only way until
> savepoints are added.
>
> I wonder, however, if this isn't one of those cases where proper
> theory-approved normalisation is the wrong way to go. Maybe you need
> an order-submission queue table to keep contention low on the
> (products? I think that was your example) table.
>
> A
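The pattern described in the quoted text (catch the duplicate-key error,
then update the existing record) can be sketched with Python's built-in
sqlite3, which, unlike PostgreSQL at the time, does not abort the whole
transaction on a statement error; the orders table and qty column are
illustrative:

```python
import sqlite3

def submit_order(conn, order_id):
    """Insert the order; on duplicate key, update the existing row instead."""
    cur = conn.cursor()
    try:
        cur.execute("INSERT INTO orders (id, qty) VALUES (?, 1)", (order_id,))
    except sqlite3.IntegrityError:
        # Another client got there first: fall back to updating the record.
        cur.execute("UPDATE orders SET qty = qty + 1 WHERE id = ?", (order_id,))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")
submit_order(conn, 42)
submit_order(conn, 42)  # duplicate key is caught; row is updated instead
qty = conn.execute("SELECT qty FROM orders WHERE id = 42").fetchone()[0]
print(qty)  # -> 2
```

In PostgreSQL as it stood then, the second call's IntegrityError would have
poisoned the transaction, forcing a full rollback before any further work.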
