Re: [GENERAL] slow inserts and updates on large tables

From: Herouth Maoz <herouth(at)oumail(dot)openu(dot)ac(dot)il>
To: jim(at)reptiles(dot)org (Jim Mercer)
Cc: pgsql-general(at)postgreSQL(dot)org
Subject: Re: [GENERAL] slow inserts and updates on large tables
Date: 1999-02-17 14:34:41
Message-ID: l03110701b2f0813254e6@[147.233.159.109]
Lists: pgsql-general

At 16:10 +0200 on 17/2/99, Jim Mercer wrote:

>
> > 3) Back to the issue of INSERTS - copies are faster. If you can transform
> > the data into tab-delimited format as required by COPY, you save a lot
> > of time on parsing, planning etc.
>
> this sorta defeats the purpose of putting the data in an SQL database. 8^)

You probably misunderstood me. If you convert the data to tab-delimited text
and then use COPY table_name FROM filename/stdin instead of INSERT, it will
be much faster, because the parsing and planning are done once for the whole
copy rather than once per line.

I didn't tell you to use the data directly from those text files...

In fact, it doesn't require using text files at all, just reformatting your
program. If until now it did

- - - -

while (data_still_coming) {

sprintf( command, "INSERT INTO table1 VALUES( '%s', '%s', '%s' )",
item1, item2, item3 );

PQexec( con, command );
}

- - - -

You would instead do

- - - -

PQexec( con, "COPY table1 FROM stdin" );

while (data_still_coming) {

sprintf( line, "%s\t%s\t%s\n" , item1, item2, item3 );
PQputline( con, line );

}

PQputline( con, "\\.\n" );   /* end-of-data marker is backslash-period */
PQendcopy(con);

- - - -

It's simply a different way of formatting your data insertion.

Herouth
