From: Doug McNaught <doug(at)wireboard(dot)com>
To: holger(at)marzen(dot)de
Cc: Martijn van Oosterhout <kleptog(at)svana(dot)org>, "Samuel J(dot) Sutjiono" <ssutjiono(at)wc-group(dot)com>, <pgsql-general(at)postgresql(dot)org>, <pgsql-sql(at)postgresql(dot)org>
Subject: Re: Performance issues with compaq server
Date: 2002-05-08 15:02:35
Message-ID: m3offqk7hw.fsf@varsoon.wireboard.com
Lists: pgsql-general, pgsql-sql
Holger Marzen <holger(at)marzen(dot)de> writes:
> ACK. On given hardware I get about 150 inserts per second. Using a
> BEGIN/END transaction for a group of 100 inserts speeds it up to about
> 450 inserts per second.
COPY is even faster, since there is less query parsing to be done, and
you get one transaction per COPY statement even without BEGIN/END.
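The batching effect described above can be sketched with a small stand-in. This uses SQLite (via Python's stdlib `sqlite3`) rather than PostgreSQL purely so it runs without a server, and the table and data are hypothetical; the principle is the same: grouping many INSERTs into one transaction avoids a per-row commit, which is where most of the time goes.

```python
import sqlite3

# In-memory SQLite stand-in; schema and rows are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

rows = [(i, "row-%d" % i) for i in range(100)]

# One transaction for the whole batch (the BEGIN/END approach above).
# The connection context manager opens a transaction and commits on exit;
# executemany sends all rows inside that single transaction.
with conn:
    conn.executemany("INSERT INTO t (id, val) VALUES (?, ?)", rows)

count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)  # 100
```

With PostgreSQL itself, COPY goes further still, since the server parses one statement and streams the data, instead of parsing 100 INSERTs.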
> But beware: if one insert fails (duplicate key, faulty data) then you
> have to re-insert the remaining rows as single transactions, else all
> rows of the previous transaction are discarded.
Hmm, don't you have to ROLLBACK and redo the whole transaction without
the offending row(s), since you can't COMMIT while in the ABORT state? Or
am I misunderstanding?
-Doug
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Nigel J. Andrews | 2002-05-08 15:02:59 | Potential problem reporting |
| Previous Message | Darko Prenosil | 2002-05-08 14:54:22 | Fw: C trigger |

| | From | Date | Subject |
|---|---|---|---|
| Next Message | Charles Hauser | 2002-05-08 15:16:51 | CURSOR/FETCH vs LIMIT/OFFSET |
| Previous Message | Holger Marzen | 2002-05-08 07:05:50 | Re: Performance issues with compaq server |