| From: | Frank Wiles <frank(at)wiles(dot)org> |
|---|---|
| To: | "Rahul_Iyer" <rahul_iyer(at)persistent(dot)co(dot)in> |
| Cc: | pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: Speeding up operations |
| Date: | 2003-08-13 19:47:12 |
| Message-ID: | 20030813144712.6b4cfeca.frank@wiles.org |
| Lists: | pgsql-hackers |
On Wed, 13 Aug 2003 10:53:39 +0530
"Rahul_Iyer" <rahul_iyer(at)persistent(dot)co(dot)in> wrote:
> Hi,
> I'm on a project using Postgres. The project involves, at times, up to
> 5,000,000 inserts. I was checking the performance of Postgres for 5M
> inserts into a 2-column table (one column integer, the second character). I
> used the PREPARE... and EXECUTE method, so I basically had 5M EXECUTE
> statements and 1 PREPARE statement. Postgres took 144 minutes for this.
> Is there any way to improve this performance? If so, how? BTW, I'm
> using it on SPARC/Solaris 2.6.
> Thanks in advance,
> Rahul
>
> P.S. Kindly point me towards any relevant documentation as well.
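
[For reference, the pattern described above is roughly the following sketch; the table name `items` and its columns are illustrative, not from the original post:]

```sql
-- Hypothetical schema matching the described 2-column table.
PREPARE bulk_ins (integer, varchar) AS
    INSERT INTO items (id, name) VALUES ($1, $2);

-- The client then issues one EXECUTE per row, 5M times over:
EXECUTE bulk_ins (1, 'first row');
EXECUTE bulk_ins (2, 'second row');
```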
If this is a one-time load, you'll want to drop any indexes
and rebuild them after the inserts are done.
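
[A minimal sketch of that drop-and-rebuild pattern; the index and table names here are hypothetical:]

```sql
-- Drop the index before loading; maintaining it per-row is the slow part.
DROP INDEX items_name_idx;

-- ... perform the bulk inserts ...

-- Rebuild once, after all rows are in place.
CREATE INDEX items_name_idx ON items (name);
```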
Also, you'll want to look into the COPY command here:
http://www.postgresql.org/docs/7.3/static/sql-copy.html
Loading the data from a file like this is probably going to be much
faster than from a script or program.
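
[A sketch of what that might look like, assuming tab-delimited data in a file such as /tmp/items.dat; the path and table name are made up for illustration:]

```sql
-- COPY reads the file server-side in one statement,
-- avoiding per-row statement overhead entirely.
COPY items FROM '/tmp/items.dat';
-- Each line of the file: integer, tab, text, e.g.
-- 1	first row
-- 2	second row
```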
---------------------------------
Frank Wiles <frank(at)wiles(dot)org>
http://frank.wiles.org
---------------------------------