From: Hannu Krosing <hannu(at)tm(dot)ee>
To: Rahul_Iyer <rahul_iyer(at)persistent(dot)co(dot)in>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speeding up operations
Date: 2003-08-13 22:42:44
Message-ID: 1060814563.4549.4.camel@fuji.krosing.net
Lists: pgsql-hackers
Rahul_Iyer wrote on Wed, 13.08.2003 at 08:23:
> Hi,
> I'm on a project using Postgres. The project involves, at times, up to
> 5,000,000 inserts. I was checking the performance of Postgres for 5M
> inserts into a 2-column table (one column integer, the second
> character). I used the PREPARE ... EXECUTE method, so I basically had
> 5M EXECUTE statements and 1 PREPARE statement. Postgres took 144 min
> for this. Is there any way to improve this performance? If so, how?
> By the way, I'm using it on SPARC/Solaris 2.6.
If you are inserting into an empty table with a primary key (or other
constraints), you can run ANALYZE on that table 1-2 minutes after you
have started the INSERTs, so that the constraint-checking logic will do
the right thing (use the index for the primary key).
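A rough session sketch of that timing (table and column names here are made up for illustration; the key point is that ANALYZE runs shortly after the load begins, not before):

```sql
-- One session starts the bulk load via a prepared statement:
CREATE TABLE items (id integer PRIMARY KEY, name character(10));
PREPARE ins (integer, character) AS INSERT INTO items VALUES ($1, $2);
EXECUTE ins(1, 'a');
-- ... millions more EXECUTE calls follow ...

-- A second session, 1-2 minutes into the load:
ANALYZE items;  -- now the planner sees a non-empty table and will
                -- use the primary-key index for constraint checks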
In my tests I achieved about 9000 inserts/sec by using multiple
inserting frontends and ~100 inserts per transaction (no indexes, 6
columns, 4 processors, 2GB memory, test clients running on the same
computer).
--------------
Hannu