From: Lincoln Yeoh <lyeoh(at)pop(dot)jaring(dot)my>
To: Florian Weimer <fweimer(at)bfk(dot)de>
Cc: gnanam(at)zoniac(dot)com, "'Atul Goel'" <Atul(dot)Goel(at)iggroup(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: On duplicate ignore
Date: 2012-01-20 11:12:34
Message-ID: 20120120111259.736D61A7958A@mail.postgresql.org
Lists: pgsql-general
At 04:27 PM 1/20/2012, Florian Weimer wrote:
>* Lincoln Yeoh:
>
> >>If you use serializable transactions in PostgreSQL 9.1, you can
> >>implement such constraints in the application without additional
> >>locking. However, with concurrent writes and without an index, the rate
> >>of detected serialization violations and resulting transactions aborts
> >>will be high.
> >
> > Would writing application-side code to handle those transaction aborts
> > in 9.1 be much easier than writing code to handle transaction
> > aborts/DB exceptions due to unique constraint violations? These
> > transaction aborts have to be handled differently (e.g. retried for X
> > seconds/Y tries) from other sorts of transaction aborts (not retried).
>
>There's a separate error code, so it's easier to deal with in theory.
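Just so I'm sure I understand what that handling would look like in
practice, here is the rough shape I have in mind (untested sketch using
psycopg2; the table, column and connection details are made up):

import psycopg2

SERIALIZATION_FAILURE = '40001'  # retryable
UNIQUE_VIOLATION = '23505'       # not worth retrying, the row is already there

def insert_with_retry(dsn, email, max_tries=5):
    # Reissue the whole transaction on a serialization failure,
    # give up quietly on a duplicate key, re-raise anything else.
    conn = psycopg2.connect(dsn)
    conn.set_session(isolation_level='SERIALIZABLE')
    try:
        for attempt in range(max_tries):
            try:
                with conn.cursor() as cur:
                    cur.execute("INSERT INTO subscribers (email) VALUES (%s)",
                                (email,))
                conn.commit()
                return True
            except psycopg2.Error as e:
                conn.rollback()
                if e.pgcode == UNIQUE_VIOLATION:
                    return False  # duplicate: ignore and move on
                if e.pgcode == SERIALIZATION_FAILURE and attempt + 1 < max_tries:
                    continue      # retry the transaction from the top
                raise
    finally:
        conn.close()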
Is there a simple way to get PostgreSQL to retry a transaction, or
does the application have to reissue all the necessary statements
itself?
I'd personally prefer to use locking and selects to avoid transaction
aborts, whether due to unique constraint violations or to
serialization violations.
But I'm lazy ;).
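For the record, the sort of locking-and-selects pattern I mean is
something like this (rough and untested; the advisory lock key and
table are just for illustration):

def insert_if_absent(conn, email):
    # Serialize concurrent inserts of the same value on a transaction-scoped
    # advisory lock, then select-and-insert, so neither a unique-constraint
    # nor a serialization abort should happen for this key.
    with conn.cursor() as cur:
        cur.execute("SELECT pg_advisory_xact_lock(hashtext(%s))", (email,))
        cur.execute("SELECT 1 FROM subscribers WHERE email = %s", (email,))
        if cur.fetchone() is None:
            cur.execute("INSERT INTO subscribers (email) VALUES (%s)", (email,))
    conn.commit()

The advisory lock is released automatically at commit or rollback, so
concurrent inserts of the same value just queue behind it instead of
aborting.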
Regards,
Link.