From: | "Matt Van Mater" <nutter_(at)hotmail(dot)com> |
---|---|
To: | pgsql-general(at)postgresql(dot)org |
Subject: | disabling autocommit |
Date: | 2004-08-11 18:24:59 |
Message-ID: | BAY9-F85GhsXRhBPnAF000199ef@hotmail.com |
Lists: pgsql-general
I'm looking to get a little more performance out of my database, and I saw a
section in the docs about disabling autocommit by wrapping statements in the
BEGIN and COMMIT keywords.
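
(For reference, the pattern from the docs I'm talking about is roughly the
following; the table and values are made up for illustration:)

  BEGIN;
  INSERT INTO mytable VALUES (1);
  INSERT INTO mytable VALUES (2);
  -- ... many more inserts ...
  COMMIT;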
My problem is this: I enforce unique rows for all data, and occasionally I get
an error when I try to insert a duplicate entry. I expect these duplicates and
rely on the database to enforce row uniqueness. When I run the insert
statements without the BEGIN and COMMIT keywords, only that single insert
fails, but if I disable autocommit then all of the inserts fail because of
that one error.
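
To make that concrete, here is a sketch of the two behaviors I'm seeing (table
name and values are made up):

  -- made-up table with a uniqueness constraint
  CREATE TABLE mytable (id integer PRIMARY KEY);

  -- With autocommit (no BEGIN/COMMIT), each insert stands alone:
  INSERT INTO mytable VALUES (1);  -- succeeds, committed
  INSERT INTO mytable VALUES (1);  -- duplicate: only this insert fails
  INSERT INTO mytable VALUES (2);  -- succeeds, committed

  -- With the batch wrapped in one explicit transaction:
  BEGIN;
  INSERT INTO mytable VALUES (3);  -- succeeds so far
  INSERT INTO mytable VALUES (1);  -- duplicate: error aborts the whole transaction
  INSERT INTO mytable VALUES (4);  -- rejected ("current transaction is aborted")
  COMMIT;                          -- acts as a rollback; rows 3 and 4 are lost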
As a test I ran about 1000 identical inserts with autocommit on and again with
it off, and I get roughly a 33% speed increase with autocommit off, so it's
definitely a good thing. The problem is that if I parse the insert statements
myself to weed out duplicates before sending them, I feel I would lose the
advantage that disabling autocommit gives me and simply spend the CPU cycles
somewhere else.
Is there a way for me to say 'only commit the successful commands and ignore
the unsuccessful ones'? I know that all-or-nothing behavior is the whole point
of this kind of transaction/rollback statement, but I was curious whether
there is a way I could work around it.
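
Ideally I'd like something roughly like the sketch below, where a failed
insert only undoes itself and the rest of the batch still commits. I'm
guessing at SAVEPOINT here (it looks like an 8.0 feature), so I don't know
whether it applies to my setup or what it would cost:

  BEGIN;

  SAVEPOINT row1;
  INSERT INTO mytable VALUES (5);   -- succeeds
  RELEASE SAVEPOINT row1;           -- keep it

  SAVEPOINT row2;
  INSERT INTO mytable VALUES (1);   -- a known duplicate: this insert fails
  ROLLBACK TO SAVEPOINT row2;       -- undo only the failed insert; transaction stays usable

  SAVEPOINT row3;
  INSERT INTO mytable VALUES (6);   -- succeeds
  RELEASE SAVEPOINT row3;

  COMMIT;                           -- rows 5 and 6 are committed; the duplicate is skipped

(In practice a client would RELEASE on success and ROLLBACK TO only on error,
rather than hard-coding which row fails as in this sketch.)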
Matt