Re: Thousands INSERT going slow ...

From: Doug McNaught <doug(at)mcnaught(dot)org>
To: Hervé Piedvache <herve(at)elma(dot)fr>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Thousands INSERT going slow ...
Date: 2003-03-22 17:36:09
Message-ID: m3wuir483a.fsf@varsoon.wireboard.com
Lists: pgsql-general

Hervé Piedvache <herve(at)elma(dot)fr> writes:

> Hi,
>
> I'm just testing insertion of about 600,000 records into 3 tables.
>
> I'm making a big text file with 3 inserts each time (one for each of my 3
> tables), like insert into xx (yy) values ('data'); so I have 3 x 600,000
> inserts in the file.
>
> Table N2 has a foreign key reference to Table N1's primary key ...
> It's not in a transaction ... I have only a primary key on each of the 3 tables ...

You're getting killed by per-statement transaction overhead. Batch 1000
or so inserts (or even more; big transactions don't hurt) into a single
transaction and you'll see much better performance.
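For example, the generated file could be wrapped like this (table and
column names are just the placeholders from your message):

```sql
BEGIN;
INSERT INTO xx (yy) VALUES ('data1');
INSERT INTO xx (yy) VALUES ('data2');
-- ... roughly 1000 more inserts ...
COMMIT;
```

That way the commit cost (including the disk flush) is paid once per
batch instead of once per row.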

Also (under 7.3) run VACUUM during the insertion process -- it may help.

You could also disable the foreign key triggers during the run if
you're sure the data is consistent.
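A sketch of the usual catalog hack for that (the table name "xx2" is a
placeholder for your Table N2; on 7.3 there is no nicer syntax for this,
so double-check against your catalogs before relying on it):

```sql
-- Disable all triggers (including the FK check triggers) on the child table.
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'xx2';

-- ... run the bulk inserts ...

-- Re-enable by restoring the real trigger count from pg_trigger.
UPDATE pg_class
SET reltriggers = (
    SELECT count(*) FROM pg_trigger WHERE pg_trigger.tgrelid = pg_class.oid
)
WHERE relname = 'xx2';
```

Only do this if you're certain the inserted data really satisfies the
constraint, since the checks are simply skipped.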

-Doug
