From: Haller Christoph <ch(at)rodos(dot)fzk(dot)de>
To: christof(at)petig-baender(dot)de (Christof Petig)
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Abort transaction on duplicate key error
Date: 2001-09-27 12:59:47
Message-ID: 200109271059.MAA29400@rodos
Lists: pgsql-hackers
Thanks a lot. Now that I've read your message,
I wonder why I asked something so trivial.
Christoph
> > In a C application I want to run several
> > insert commands within a chained transaction
> > (for faster execution).
> > From time to time there will be an insert command
> > causing an
> > ERROR: Cannot insert a duplicate key into a unique index
> >
> > As a result, the whole transaction is aborted and all
> > the previous inserts are lost.
> > Is there any way to preserve the data
> > except working with "autocommit" ?
> > What I have in mind particularly is something like
> > "Do not abort on duplicate key error".
>
> Simply select by the key you want to enter. If you get SQLCODE 100
> (no row found), an insert is OK; otherwise do an update. Oracle has a
> feature called 'insert or update' which follows this strategy. There was
> also some talk on this list about implementing it, but I don't remember
> the conclusion.
>
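A minimal sketch of the select-then-insert-or-update strategy described
above, using libpq. The table items and its columns key and val are
made-up names, and PQexecParams postdates this 2001 thread; an ecpg
program would check for SQLCODE 100 instead of counting tuples.

/* Sketch only: table "items" and columns "key"/"val" are hypothetical. */
#include <stdio.h>
#include <libpq-fe.h>

static void insert_or_update(PGconn *conn, const char *key, const char *val)
{
    const char *params[2] = { key, val };

    /* Select by the key we want to enter. */
    PGresult *res = PQexecParams(conn,
        "SELECT 1 FROM items WHERE key = $1",
        1, NULL, params, NULL, NULL, 0);
    if (PQresultStatus(res) != PGRES_TUPLES_OK) {
        fprintf(stderr, "select failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return;
    }
    int found = (PQntuples(res) > 0);  /* ecpg: SQLCODE 100 means "not found" */
    PQclear(res);

    /* No row found: insert is OK; otherwise do an update. */
    res = PQexecParams(conn,
        found ? "UPDATE items SET val = $2 WHERE key = $1"
              : "INSERT INTO items (key, val) VALUES ($1, $2)",
        2, NULL, params, NULL, NULL, 0);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "write failed: %s", PQerrorMessage(conn));
    PQclear(res);
}

Note that select-then-write is racy under concurrent sessions: another
client can insert the same key between the SELECT and the INSERT, so the
duplicate-key error can still occur and must still be handled.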
> BTW: I strongly recommend staying away from autocommit. With it you can
> neither control nor reliably know when a new transaction starts.
>
> Christof
>
> PS: I would love to have nested transactions, too. But no time to spare ...
> Perhaps somebody will do this for 7.3?
>
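On the PS: savepoints, added in PostgreSQL 8.0, later provided exactly
this kind of nested-transaction behavior and answer the original question
directly: a failed insert can be rolled back to a savepoint without
losing the earlier inserts in the same transaction. A sketch under that
assumption, with a made-up helper and statement list:

/* Sketch only: requires PostgreSQL 8.0+ for SAVEPOINT. */
#include <stdio.h>
#include <libpq-fe.h>

/* Run one SQL command; return 1 on success, 0 on error. */
static int run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    int ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
    if (!ok)
        fprintf(stderr, "%s", PQerrorMessage(conn));
    PQclear(res);
    return ok;
}

/* Many inserts in one transaction; a duplicate-key failure is rolled
 * back to the savepoint instead of aborting the whole transaction. */
static void bulk_insert(PGconn *conn, const char *const *stmts, int n)
{
    run(conn, "BEGIN");
    for (int i = 0; i < n; i++) {
        run(conn, "SAVEPOINT one_row");
        if (!run(conn, stmts[i]))
            run(conn, "ROLLBACK TO SAVEPOINT one_row");  /* earlier rows survive */
        else
            run(conn, "RELEASE SAVEPOINT one_row");
    }
    run(conn, "COMMIT");
}

Issuing one savepoint per row adds round trips; batching several rows
per savepoint trades recovery granularity for speed.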