From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Merlin Moncure <mmoncure(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Josh Berkus <josh(at)agliodbs(dot)com>, "nikita(dot)y(dot)volkov(at)mail(dot)ru" <nikita(dot)y(dot)volkov(at)mail(dot)ru>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: BUG #12330: ACID is broken for unique constraints
Date: 2014-12-29 14:03:28
Message-ID: 1956080374.1363800.1419861808211.JavaMail.yahoo@jws100109.mail.ne1.yahoo.com
Lists: pgsql-bugs pgsql-hackers
Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
> On Fri, Dec 26, 2014 at 12:38 PM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>
>>> Just for starters, a 40XXX error report will fail to provide the
>>> duplicated key's value. This will be a functional regression,
>>
>> Not if, as is normally the case, the transaction is retried from
>> the beginning on a serialization failure. Either the code will
>> check for a duplicate (as in the case of the OP on this thread) and
>> they won't see the error, *or* the transaction which created
>> the duplicate key will have committed before the start of the retry
>> and you will get the duplicate key error.
>
> I'm not buying that; that argument assumes duplicate key errors are
> always 'upsert' driven. Although OP's code may have checked for
> duplicates it's perfectly reasonable (and in many cases preferable) to
> force the transaction to fail and report the error directly back to
> the application. The application will then switch on the error code
> and decide what to do: retry for deadlock/serialization or abort for
> data integrity error. IOW, the error handling semantics are
> fundamentally different and should not be mixed.
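[The dispatch pattern Merlin describes could be sketched like this; `DbError` and `run_with_retry` are hypothetical stand-ins for application code, not any real driver API, and the SQLSTATEs are the standard PostgreSQL codes 40001/40P01 (serialization failure, deadlock) and 23505 (unique violation):]

```python
# Sketch of application-side error dispatch: retry the whole transaction
# on serialization/deadlock SQLSTATEs, surface integrity errors directly.
RETRYABLE = {"40001", "40P01"}   # serialization_failure, deadlock_detected


class DbError(Exception):
    """Hypothetical driver error carrying a SQLSTATE."""
    def __init__(self, sqlstate):
        super().__init__(sqlstate)
        self.sqlstate = sqlstate


def run_with_retry(txn, max_attempts=3):
    """Run txn(); retry from the beginning on a retryable SQLSTATE,
    re-raise anything else (e.g. 23505) to the caller unchanged."""
    for attempt in range(max_attempts):
        try:
            return txn()
        except DbError as e:
            if e.sqlstate in RETRYABLE and attempt + 1 < max_attempts:
                continue          # restart the transaction from scratch
            raise                 # data-integrity error, or retries exhausted
```

Under this scheme the two error classes never mix: a 40001 is handled silently by the loop, while a 23505 reaches the application's abort path.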
I think you might be agreeing with me without realizing it. Right
now you get "duplicate key error" even if the duplication is caused
by a concurrent transaction -- it is not possible to check the
error code (well, SQLSTATE, technically) to determine whether this
is fundamentally a serialization problem. What we're talking about
is returning the serialization failure return code for the cases
where it is a concurrent transaction causing the failure and
continuing to return the duplicate key error for all other cases.
Either I'm not understanding what you wrote above, or you seem to
be arguing for being able to distinguish between errors caused by
concurrent transactions and those which aren't.
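[To make the proposal concrete, here is a toy simulation of the semantics described above; it is not PostgreSQL internals, just an illustration. A unique violation caused by a still-uncommitted concurrent transaction would be reported as 40001, and once that transaction has committed before the retry starts, the same insert raises an ordinary 23505:]

```python
class TxnError(Exception):
    """Toy error carrying a SQLSTATE."""
    def __init__(self, sqlstate):
        super().__init__(sqlstate)
        self.sqlstate = sqlstate

committed = set()        # keys committed before our transaction started
in_flight = {"k1"}       # keys inserted by concurrent, uncommitted txns

def insert(key):
    if key in committed:
        raise TxnError("23505")   # true duplicate: report the key error
    if key in in_flight:
        raise TxnError("40001")   # caused by a concurrent txn: retryable

def attempt(key):
    try:
        insert(key)
        return "inserted"
    except TxnError as e:
        return e.sqlstate

first = attempt("k1")                     # concurrent txn still open
committed.add("k1"); in_flight.clear()    # it commits before our retry
second = attempt("k1")                    # retry sees a genuine duplicate
```

Here `first` is "40001" and `second` is "23505": the retry loop either succeeds, or ends with the same duplicate-key error the application gets today.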
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company