From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Gauthier, Dave" <dave(dot)gauthier(at)intel(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Array load from remote site through Perl/DBI
Date: 2008-03-11 23:27:47
Message-ID: dcc563d10803111627v1bfe4522kddf0ce110edd58a9@mail.gmail.com
Lists: pgsql-general
On Tue, Mar 11, 2008 at 1:09 PM, Gauthier, Dave <dave(dot)gauthier(at)intel(dot)com> wrote:
>
> I have a perl/dbi app that loads my DB with sequential and discrete insert
> statements. Runs very fast and I'm satisfied with it. Now I have to run
> the same app from a different site, but loading my local DB. The "one at a
> time" inserts take too long, probably because of the client/server delays
> incurred with the remote DB attach.
You can either wrap all the inserts in a BEGIN; ... COMMIT; pair, or you can
use COPY FROM STDIN. pg_dump's output uses COPY FROM STDIN, for an example.
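A minimal sketch of the transaction approach in Perl/DBI (the connection
parameters, table name, and @rows are hypothetical placeholders, not from
the original app); turning off AutoCommit means each execute still costs a
round trip, but there's only one commit for the whole batch:

```perl
use strict;
use warnings;
use DBI;

# Placeholder connection details -- adjust for your setup.
my $dbh = DBI->connect('dbi:Pg:dbname=mydb;host=remotehost',
                       'user', 'pass',
                       { AutoCommit => 0, RaiseError => 1 });

my $sth = $dbh->prepare('INSERT INTO mytable (a, b) VALUES (?, ?)');
for my $row (@rows) {
    $sth->execute(@$row);   # one round trip per row, but no per-row commit
}
$dbh->commit;               # single commit for the entire batch
$dbh->disconnect;
```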
> I was thinking of pooling all the data for the insert into arrays and then
> doing a single array insert, thereby cutting down on all the back/forth. But
> there may be other approaches.
Hmmm. I think as long as you're moving things in a transaction you'll
probably be ok.
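If the round trips themselves are the bottleneck, COPY streams all the rows
in one protocol exchange. A rough sketch using DBD::Pg's COPY support (the
pg_putcopydata/pg_putcopyend methods are from newer DBD::Pg releases; older
versions used pg_putline/pg_endcopy instead -- check your driver version.
Table name and @rows are placeholders):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=mydb;host=remotehost',
                       'user', 'pass', { RaiseError => 1 });

$dbh->do('COPY mytable (a, b) FROM STDIN');
for my $row (@rows) {
    # COPY text format: tab-separated columns, newline-terminated rows
    $dbh->pg_putcopydata(join("\t", @$row) . "\n");
}
$dbh->pg_putcopyend;
$dbh->disconnect;
```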
>
>
>
> Again, Perl/DBI, remote attach, Running v8.2.0 on Linux
>
>
>
> Thanks
>
> -dave