From: Sam Varshavchik <mrsam(at)courier-mta(dot)com>
Cc: "pgsql-jdbc(at)postgresql(dot)org" <pgsql-jdbc(at)postgresql(dot)org>
Subject: Re: Fastest method to insert data.
Date: 2002-04-21 00:03:04
Message-ID: Pine.LNX.4.44.0204201959300.2333-100000@ny.email-scan.com
Lists: pgsql-jdbc
On Sat, 20 Apr 2002, Dennis R. Gesker wrote:
> In the process of actually transferring the data I encountered some
> memory problems when using the .addBatch() methods. I probably should
> have expected this since the tables I'm seeking to draw data from are
> kind of large. I was thinking that using the .addBatch() would be a good
> approach since I could treat the whole batch as a transaction helping to
> ensure that I pulled everything I intended.
You'll need to issue executeBatch() every once in a while. My current
approach is to call executeBatch() every thousand rows. I've tried many
things; this is the one that proved to be the fastest.
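Roughly like this (the table, column names, and connection URLs below are
made up for illustration; the 1000 is just the flush interval that worked
best for me):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchLoad {
    public static void main(String[] args) throws SQLException {
        Connection src = DriverManager.getConnection("jdbc:odbc:sourcedb");
        Connection dst = DriverManager.getConnection(
            "jdbc:postgresql://localhost/mydb", "user", "password");
        dst.setAutoCommit(false);

        Statement read = src.createStatement();
        ResultSet rs = read.executeQuery("SELECT a, b FROM srctable");

        PreparedStatement ins = dst.prepareStatement(
            "INSERT INTO dsttable (a, b) VALUES (?, ?)");

        int pending = 0;
        while (rs.next()) {
            ins.setInt(1, rs.getInt(1));
            ins.setString(2, rs.getString(2));
            ins.addBatch();
            // Flush every thousand rows so the batch never grows large
            // enough to run the VM out of memory.
            if (++pending == 1000) {
                ins.executeBatch();
                pending = 0;
            }
        }
        if (pending > 0)
            ins.executeBatch();   // flush whatever is left over
        dst.commit();

        ins.close();
        rs.close();
        read.close();
        src.close();
        dst.close();
    }
}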
But I think that this is still much slower than it needs to be; a
straight COPY into the table is much faster.
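If your driver lets you get at the backend's COPY protocol directly, the
same load looks roughly like the sketch below. This assumes the
org.postgresql.copy.CopyManager interface, which not every driver build
provides, and the table and data are again made up:

import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;

public class CopyLoad {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
            "jdbc:postgresql://localhost/mydb", "user", "password");

        // Rows are streamed straight into COPY; no INSERT statements
        // get parsed and planned on the backend.
        CopyManager copy = new CopyManager((BaseConnection) conn);
        String rows = "1\tfirst row\n2\tsecond row\n";
        copy.copyIn("COPY dsttable (a, b) FROM STDIN",
            new StringReader(rows));

        conn.close();
    }
}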
> Now, if there were classes that could GULP whole tables from a database
> (PG., MS or otherwise) at one shot and recreate these tables in PG. this
> would be great!
>
> Sam: Is this the direction in which you were thinking?
Yes. The Sybase SQL server has a completely separate set of APIs that are
designed to quickly upload a bunch of data to a table. Sybase's bulk-copy
API is very similar to addBatch()/executeBatch(), except that there's no
SQL involved. You just specify the table, then start feeding it rows.