From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Zbigniew <zbigniew2011(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Problem with aborting entire transactions on error
Date: 2012-12-11 15:01:26
Message-ID: CAHyXU0xRDqLvTLxUw3TY1upRRC8_GTHWgtn0y=g1-XT8CLeq6g@mail.gmail.com
Lists: pgsql-general
On Sun, Dec 9, 2012 at 9:20 PM, Zbigniew <zbigniew2011(at)gmail(dot)com> wrote:
> Hello,
>
> As I read while googling, many people have complained about this
> before. Couldn't it be made optional (even with "default ON")?
> I understand there are situations where it is a must - for
> example, when later queries rely on the result of earlier ones -
> but there are numerous situations where just skipping a faulty query
> is all we need.
>
> A simple - but very common - example: I wanted to perform a really
> large number of inserts - using a transaction to make it faster -
> while being sure that duplicate entries would be skipped. Of course,
> this job is done best by the server itself, which is keeping an eye
> on the table's primary key. Unfortunately, not during a transaction:
> any "dupe" immediately trashes thousands of other (proper) entries.
>
> Why is this? My guess is that there is logic in the code like this:
>
> if { no error during query } {
> do it
> } else {
> withdraw this one
> rollback entire transaction
> }
>
> Therefore my request - and, as I saw, that of many others - would be
> just to introduce a little change:
>
> if { no error during query } {
> do it
> } else {
> withdraw this one
> if { ROLLBACK_ON_ERROR } {
> rollback entire transaction
> }
> }
>
> (if ROLLBACK_ON_ERROR is not set, it should carry on with the
> remaining queries)
>
> Is it really so problematic to introduce such a code change, allowing
> users to control this behaviour? Yes, I read about using
> "savepoints" - but I think we agree it's just a cumbersome
> workaround, not a real solution like my proposal. All we need is
> either a variable to set, or a command, that would modify the present
> behaviour in the way described above.
No. The point of transactions is to define a 'unit of work': all
aspects of the transaction must pass, or all must fail. There are
limited mechanisms (SAVEPOINTs) that allow finer control while still
maintaining that basic principle.
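For illustration, a savepoint-per-statement sketch (assuming a
hypothetical table t with an integer primary key) looks like this:

```sql
BEGIN;
INSERT INTO t VALUES (1);    -- succeeds
SAVEPOINT s;
INSERT INTO t VALUES (1);    -- fails: duplicate key
ROLLBACK TO SAVEPOINT s;     -- discards only the failed insert
INSERT INTO t VALUES (2);    -- transaction is still usable
COMMIT;                      -- rows 1 and 2 are committed
```

Without the ROLLBACK TO SAVEPOINT, the duplicate-key error would put
the whole transaction into the aborted state and the COMMIT would roll
everything back.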
Consolidating bulk inserts into a single transaction is widely done
for performance reasons. I understand your pain point, but you're
much better off in the long run using well-established methods, such
as staging the data before inserting it, instead of hacking the
transaction rules.
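The staging approach can be sketched like this (table names, the key
column "id", and the file path are hypothetical): load everything into
a constraint-free temporary table, then insert the de-duplicated rows
in a single statement:

```sql
BEGIN;
-- staging table has the same columns as the target, but no constraints
CREATE TEMP TABLE staging (LIKE target INCLUDING DEFAULTS);
COPY staging FROM '/path/to/data.csv' WITH (FORMAT csv);
-- insert only rows whose key is not already in target,
-- de-duplicating within the staged data as well
INSERT INTO target
SELECT DISTINCT ON (id) *
FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);
COMMIT;
```

A bad row can only fail the COPY or the single INSERT, so there is
nothing to skip mid-transaction: either the whole load succeeds or it
can be retried after fixing the input.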
merlin