From: jco(at)cornelius-olsen(dot)dk
To: Doug Fields <dfields-pg-general(at)pexicom(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Batch Inserts
Date: 2002-12-12 00:55:14
Message-ID: OF82825A9E.E13F3CD8-ONC1256C8D.0004C8E7@dk
Lists: pgsql-general
Hi Doug,
The latter is the case: only one transaction is done. Transactions cannot be
nested, so when you use an explicit BEGIN ... COMMIT, autocommit does not apply
to the statements inside the block; everything commits together at the COMMIT.
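In JDBC terms, both approaches discussed below look roughly like the minimal
sketch here. It is only an illustration: it assumes a table t (x integer)
already exists, that the PostgreSQL JDBC driver is on the classpath, and the
connection URL and credentials are placeholders. Sending several
semicolon-separated statements through one execute() is what Doug describes
below; whether other drivers accept that may vary.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - adjust for your setup.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
             Statement st = conn.createStatement()) {

            // Variant 1: send an explicit transaction block as one statement.
            // Even with autocommit on, the BEGIN ... COMMIT wrapper means the
            // three INSERTs commit together as a single transaction.
            st.execute("BEGIN WORK; "
                     + "INSERT INTO t (x) VALUES (1); "
                     + "INSERT INTO t (x) VALUES (2); "
                     + "INSERT INTO t (x) VALUES (3); "
                     + "COMMIT");

            // Variant 2: the usual JDBC way - turn autocommit off, batch a
            // couple of hundred inserts, then commit them all at once.
            conn.setAutoCommit(false);
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO t (x) VALUES (?)")) {
                for (int i = 4; i <= 200; i++) {
                    ps.setInt(1, i);
                    ps.addBatch();
                }
                ps.executeBatch();
            }
            conn.commit();
            conn.setAutoCommit(true);
        }
    }
}
```

Either way the inserts end up in one transaction instead of one per INSERT,
which is where the speedup comes from.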
/Jørn Cornelius Olsen
Doug Fields <dfields-pg-general(at)pexicom(dot)com>
Sent by: pgsql-general-owner(at)postgresql(dot)org
12-12-2002 00:03
To: "Ricardo Ryoiti S. Junior" <suga(at)netbsd(dot)com(dot)br>
cc: pgsql-general(at)postgresql(dot)org, pgsql-jdbc(at)postgresql(dot)org
Subject: Re: [GENERAL] Batch Inserts
Hi Ricardo, list,
One quick question:
> - If your "data importing" is done via inserts, make sure that
the
>batch uses transactions for each (at least or so) 200 inserts. If you
>don't, each insert will be a transaction, what will slow down you.
I use JDBC with the default "AUTOCOMMIT ON."
Does doing a statement, in one JDBC execution, of the form:
BEGIN WORK; INSERT ... ; INSERT ... ; INSERT ...; COMMIT;
count as N individual transactions (one per INSERT, due to the autocommit
setting), or does the surrounding BEGIN WORK; ... COMMIT; override that setting?
Thanks,
Doug