From: jguthrie(at)air(dot)org
To: pgsql-jdbc(at)postgresql(dot)org
Subject: performance issues
Date: 2002-12-05 16:44:56
Message-ID: 20021205164456.400813CCE@guthrie.charm.net
I (and my co-workers) am trying to get the best performance out of JDBC
when inserting into one specific table (the rest of the app can suffer
if it improves performance there). A couple of issues have come up,
though, where I would welcome advice:
1) Batches - we think we can save time by batching up, say, 10 inserts
and using Statement.executeBatch(). The problem here is that
Statement.addBatch() wants a complete SQL statement String, whereas we
are currently using PreparedStatements. And we are using
PreparedStatements because we are inserting binary data (using
setBytes() to load a PostgreSQL bytea field). So is there a way to:
- use PreparedStatements in a batch environment?
- insert binary data in String format?
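For what it's worth, this is a minimal sketch of what we would like to do. The table name "images" and its columns (id integer, data bytea) are made up for illustration; it assumes the no-argument PreparedStatement.addBatch() from JDBC 2.0, which queues the current parameter set rather than taking a SQL String:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch: batched inserts of binary data through a PreparedStatement.
// "images" (id integer, data bytea) is a made-up table for illustration.
public class BatchInsertSketch {

    static final String INSERT_SQL =
        "INSERT INTO images (id, data) VALUES (?, ?)";

    // Queues one parameter set per row with the no-argument
    // PreparedStatement.addBatch(), then sends them all in one go.
    static int[] insertRows(Connection conn, byte[][] rows)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(INSERT_SQL);
        try {
            for (int i = 0; i < rows.length; i++) {
                ps.setInt(1, i);
                ps.setBytes(2, rows[i]); // binary value for the bytea column
                ps.addBatch();           // queue this parameter set
            }
            return ps.executeBatch();    // execute all queued inserts
        } finally {
            ps.close();
        }
    }
}
```

If this no-argument addBatch() works with the driver, it would sidestep the String question entirely.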
2) Indexes - the table has a primary key, so it has an index. One
bright idea I had was to drop the primary key, saving time by skipping
the index maintenance on every insert. Surprisingly, this seems to
have no effect at all. Does this make sense?
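In case it helps anyone reproduce the experiment, this is roughly how we drop and recreate the primary key around a bulk load. The names here are made up; PostgreSQL names the implicit primary-key constraint "<table>_pkey" by default, but the actual name should be checked first:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: load with the primary key dropped and rebuilt afterwards, so
// the index is built once at the end instead of updated per insert.
// "images" and "images_pkey" are made-up names for illustration.
public class LoadWithoutIndexSketch {

    static final String DROP_PK =
        "ALTER TABLE images DROP CONSTRAINT images_pkey";
    static final String ADD_PK =
        "ALTER TABLE images ADD PRIMARY KEY (id)";

    static void load(Connection conn, Runnable doInserts)
            throws SQLException {
        Statement st = conn.createStatement();
        try {
            st.executeUpdate(DROP_PK); // no index to maintain during the load
            doInserts.run();           // the actual inserts
            st.executeUpdate(ADD_PK);  // rebuild the index in one pass
        } finally {
            st.close();
        }
    }
}
```

Even with this, we see no measurable difference, which is what prompted the question.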
Thanks. Again, all ideas welcome.
John Guthrie