From: Richard Huxton <dev(at)archonet(dot)com>
To: Flavio Palumbo <f(dot)palumbo(at)silmasoftware(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Commit strange behavior
Date: 2008-11-18 09:36:53
Message-ID: 49228CB5.4050001@archonet.com
Lists: pgsql-general
Flavio Palumbo wrote:
>
> Unfortunately postgres seems to work in a different way, cause if there is
> just one error while the transaction is active I'm not able to commit the
> well formed data in the db, no matter if the good records were inserted
> sooner or later the error.
>
> Does this behavior appears right for postgres ?
Yes. Either everything in a transaction works or nothing does. That is
the nature of a transaction. Some systems weaken this rule to allow
certain types of error but not others, but PostgreSQL just says
all-or-nothing.
> There is any way or workaround to achieve my goal ?
You could look into savepoints, which allow you to roll back the
transaction to a known point and then continue from there. So:
BEGIN;
SAVEPOINT foo; -- now we can rollback to here
...insert 1000 rows...
SAVEPOINT foo; -- now we can rollback to here
...insert 1000 rows...
SAVEPOINT foo; -- now we can rollback to here
...insert 173 rows, get an error...
ROLLBACK TO SAVEPOINT foo;
...insert the 172 rows that worked, skip the next one, and continue
...
if number of errors < threshold then COMMIT else ROLLBACK
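In application code the same pattern looks roughly like the sketch below. This is a toy in-memory stand-in for a database transaction, not a real driver, and it varies the recipe slightly: on an error it rolls the whole block back and retries it row by row, rather than skipping just the failing row. All the names (`ToyTransaction`, `load`, `bad_rows`) are invented for the illustration:

```python
class ToyTransaction:
    """In-memory stand-in for a database transaction with one savepoint."""
    def __init__(self):
        self.rows = []   # rows "inserted" so far in this transaction
        self._mark = 0   # position of the most recent savepoint

    def savepoint(self):
        self._mark = len(self.rows)

    def rollback_to_savepoint(self):
        del self.rows[self._mark:]

    def insert(self, row, bad_rows):
        if row in bad_rows:
            raise ValueError(f"bad row: {row}")
        self.rows.append(row)


def load(rows, bad_rows, block_size, max_errors):
    """Insert rows in blocks; on error, roll back the block and retry row by row."""
    tx = ToyTransaction()
    errors = 0
    for start in range(0, len(rows), block_size):
        block = rows[start:start + block_size]
        tx.savepoint()
        try:
            for row in block:
                tx.insert(row, bad_rows)
        except ValueError:
            tx.rollback_to_savepoint()    # discard the partial block
            for row in block:             # retry one row at a time
                tx.savepoint()
                try:
                    tx.insert(row, bad_rows)
                except ValueError:
                    tx.rollback_to_savepoint()
                    errors += 1
    if errors < max_errors:
        return tx.rows, errors   # "COMMIT"
    return [], errors            # "ROLLBACK"
```

With a real PostgreSQL connection the `savepoint()`/`rollback_to_savepoint()` calls would become `SAVEPOINT foo` / `ROLLBACK TO SAVEPOINT foo` statements inside one open transaction.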
How many rows to put in each block (e.g. 1000) will depend on how
many errors you expect to see. One common tactic is to start off small
(say 100), increase the block size after every block that was OK, and
decrease it every time you hit an error.
HTH
--
Richard Huxton
Archonet Ltd