From: Andrea Aime <andrea(dot)aime(at)aliceposta(dot)it>
To: Alan Stange <stange(at)rentec(dot)com>
Cc: pg(at)fastcrypt(dot)com, PostgreSQL JDBC Mailing List <pgsql-jdbc(at)postgresql(dot)org>
Subject: Re: V3 protocol, batch statements and binary transfer
Date: 2004-03-31 08:05:50
Message-ID: 406A7BDE.3020502@aliceposta.it
Lists: pgsql-jdbc
Alan Stange wrote:
> Hello all,
>
> We have the same performance problems with bulk data inserts from jdbc
> as well. We used batches too, but made sure that each statement in
> the batch was large (~128KB) and inserted many rows at a time. This
> cut down on the number of round trips to the postgresql server.
Yes, I did the same by putting together many inserts into a single statement,
and in fact it halved the time required to perform the inserts. Still, it
takes too much time anyway: 1 minute for the insertion versus 5 seconds to
read the same data back...
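For reference, the single-statement approach I'm describing looks roughly
like this sketch, which builds one multi-row INSERT instead of issuing one
statement per row (table and column names are made up for illustration):

```java
import java.util.StringJoiner;

public class MultiRowInsert {
    // Build "INSERT INTO t (a, b) VALUES (?, ?), (?, ?), ..." for rowCount
    // rows, so the whole batch travels to the server in one round trip.
    static String buildInsert(String table, String[] columns, int rowCount) {
        String placeholders = "(" + "?, ".repeat(columns.length - 1) + "?)";
        StringJoiner values = new StringJoiner(", ");
        for (int i = 0; i < rowCount; i++) {
            values.add(placeholders);
        }
        return "INSERT INTO " + table + " (" + String.join(", ", columns)
                + ") VALUES " + values;
    }

    public static void main(String[] args) {
        // Prints: INSERT INTO mytable (a, b) VALUES (?, ?), (?, ?), (?, ?)
        System.out.println(buildInsert("mytable", new String[]{"a", "b"}, 3));
    }
}
```

The resulting string would then be prepared once and the parameters bound in
a loop; the win is purely in the reduced number of network round trips.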
> In addition to a) and b) below, I'd add that the read size off the
> sockets is too small. It's a few KB currently and this should
> definitely be bumped up to a larger number.
In fact I've tried to bump up the 8KB value that's hardwired in the code
to 16, 64 and 128KB, but saw no improvement on a 100Mb fully switched LAN...
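Besides the driver's internal read size, one can also ask the OS for a
larger socket receive buffer. A minimal sketch (the 128KB figure is just an
example; the OS may grant a different size, and on a low-latency LAN it may
make no measurable difference, as my test above suggests):

```java
import java.net.Socket;
import java.net.SocketException;

public class ReceiveBufferSketch {
    public static void main(String[] args) throws SocketException {
        Socket s = new Socket();
        // Request a 128KB receive buffer before connecting; the OS may
        // clamp or round this value, so read it back to see what was granted.
        s.setReceiveBufferSize(128 * 1024);
        System.out.println("granted receive buffer: "
                + s.getReceiveBufferSize() + " bytes");
    }
}
```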
> We're running on a gigE network and see about 50MB/s data rates coming
> off the server (using a 2GB shared memory region). This sounds nice,
> but one has to keep in mind that the data is binary encoded in text.
>
> Anyway, count me in to work on the jdbc client as well (in my limited
> time). To start, I have a couple of local performance hacks for which
> I should submit proper patches.
>
I'm eager to have a look at them :-)
Best regards
Andrea Aime