From: Mark Kirkwood <markir(at)i4free(dot)co(dot)nz>
To: Richard Huxton <dev(at)archonet(dot)com>, edipoelder(at)ig(dot)com(dot)br
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Memory And Performance
Date: 2001-04-07 03:31:04
Message-ID: 01040715310400.00651@spikey.slithery.org
Lists: pgsql-sql
> > In the system I'm developing, I have about 25000 (persons) x 8 (exams)
> > x 15 (answers per exam) = 3000000 records to process and it is VERY SLOW.
>
> If you need to import large quantities of data, look at the COPY
> command, which tends to be faster.
By way of example of the level of improvement COPY gives: a 3000000 row
table (350Mb dump file -> 450Mb table) can be loaded via COPY in 7
minutes. Inserting each row individually (say, using a Perl program to
read the file and DBD-Pg to insert, committing every 10000 rows) takes
about 75 minutes. I used a PII 266MHz/192Mb and PostgreSQL 7.1b5 for
these results. PostgreSQL 7.0.2 is slower (20-30% or so...), but should
still show a similar level of improvement with COPY.
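For anyone curious, the per-row path looked roughly like the sketch
below (the table name "answers", the file path and the column list are
invented for illustration, not taken from my actual script):

  #!/usr/bin/perl -w
  # Rough sketch of the slow per-row insert approach, committing in
  # batches of 10000 rows via DBD-Pg.
  use strict;
  use DBI;

  my $dbh = DBI->connect("dbi:Pg:dbname=exams", "", "",
                         { AutoCommit => 0, RaiseError => 1 });
  my $sth = $dbh->prepare(
      "INSERT INTO answers (person_id, exam_id, answer) VALUES (?, ?, ?)");

  open(my $in, '<', '/tmp/answers.dat') or die "open: $!";
  my $rows = 0;
  while (<$in>) {
      chomp;
      $sth->execute(split /\t/);              # one INSERT per input line
      $dbh->commit if ++$rows % 10000 == 0;   # commit every 10000 rows
  }
  $dbh->commit;                               # commit the final partial batch
  $dbh->disconnect;

The fast path replaces all of that with a single statement, e.g.
COPY answers FROM '/tmp/answers.dat' (or \copy from psql if the file
lives on the client rather than on the server).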
Good loading
Mark