From: Steven Lembark <lembark(at)wrkhors(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Cc: lembark(at)wrkhors(dot)com
Subject: Re: large numbers of inserts out of memory strategy
Date: 2017-12-01 17:49:42
Message-ID: 20171201114942.177b394c@wrkhors.com
Lists: pgsql-general
On Thu, 30 Nov 2017 08:43:32 -0600
Ted Toth <txtoth(at)gmail(dot)com> wrote:
> What is the downside of using a DO block? I'd have to do a nextval on
> each sequence before I could use currval, right? Or I could do 'select
> last_value from <sequence>'.
You are creating a piece of code that has to be parsed, tokenized,
and compiled prior to execution. What's biting you is that you've
created a function the size of your dataset.
If you like do-blocks, write a short block that inserts one record
using placeholders and call it a few zillion times.
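For instance, a minimal sketch of that pattern in plain SQL, using a
server-side prepared statement (the table, column, and type names here
are hypothetical):

    PREPARE ins ( int, text ) AS
        INSERT INTO yourtable ( field1, field2 ) VALUES ( $1, $2 );

    -- one cheap, pre-parsed call per row:
    EXECUTE ins( 1, 'first row' );
    EXECUTE ins( 2, 'second row' );

    DEALLOCATE ins;

The statement is parsed and planned once, at PREPARE time, so each
EXECUTE only binds values -- no per-row compilation.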
That or (in DBI-speak):
eval
{
    $dbh->{ RaiseError } = 1;   # die on any DBI error
    $dbh->{ AutoCommit } = 0;   # batch everything into one transaction

    # prepare once: the statement is parsed a single time,
    # not once per row.
    my $sth = $dbh->prepare
    (
        'insert into yourtable ( field1, field2 ) values ( $1, $2 )'
    );

    # statement handles use execute(); do() belongs to the
    # database handle.
    $sth->execute( @$_ ) for @rows;

    $dbh->commit
}
or die "Failed execution: $@";
which will be nearly as efficient in the long run.
That or just import the data from a csv/tsv with COPY (there are good
examples of data import available in the PG docs).
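A minimal sketch of the COPY route, again with hypothetical table,
column, and file names:

    -- server side: the file must be readable by the server process
    COPY yourtable ( field1, field2 )
        FROM '/path/to/data.csv' WITH ( FORMAT csv );

    -- client side via psql, reading a file local to the client:
    \copy yourtable ( field1, field2 ) from 'data.csv' with ( format csv )

Either form loads the whole file in a single statement, so there is no
per-row parse or round-trip overhead at all.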
--
Steven Lembark 1505 National Ave
Workhorse Computing Rockford, IL 61103
lembark(at)wrkhors(dot)com +1 888 359 3508