From: "Shridhar Daithankar" <shridhar_daithankar(at)persistent(dot)co(dot)in>
To: Pgsql-hackers(at)postgresql(dot)org
Subject: Improving speed of copy
Date: 2002-09-20 15:52:08
Message-ID: 3D8B9180.32413.19C49E9E@localhost
Lists: pgsql-hackers
Hi all,
While testing for large databases, I am trying to load 12.5M rows of data from
a text file, and it takes a lot longer than MySQL, even with copy.
MySQL takes 221 sec. vs. 1121 sec. for PostgreSQL. For PostgreSQL, that is around
11K rows per second. Each tuple has 23 fields with a fixed length of around 100
bytes.
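For reference, the load is done with a plain server-side COPY. A minimal sketch of
what that looks like through libpq is below (the table name, file path and connection
string are made up, not the actual test setup):

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=testdb");
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        exit(1);
    }

    /* server-side COPY: the backend reads the text file directly */
    res = PQexec(conn, "COPY testtable FROM '/tmp/testdata.txt'");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}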
I wrote a program which does the inserts in batches, but none of the variants reaches
the performance of copy. I tried 1K/5K/10K/100K rows per transaction, but it cannot
cross 2.5K rows/sec.
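The structure of that program is roughly as sketched below. This is only an
illustration of the batching, with a made-up table, batch size and a single text
column; the real program splits each line into the 23 fields:

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

#define BATCH_SIZE 10000

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=testdb");
    char    line[1024];
    long    n = 0;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        exit(1);
    }

    PQclear(PQexec(conn, "BEGIN"));
    while (fgets(line, sizeof(line), stdin) != NULL)
    {
        char sql[2048];

        /* in reality each line would be split into 23 fields and escaped;
         * here it goes into a single text column for brevity */
        snprintf(sql, sizeof(sql), "INSERT INTO testtable VALUES ('%s')", line);
        PQclear(PQexec(conn, sql));

        /* commit every BATCH_SIZE rows and start a new transaction */
        if (++n % BATCH_SIZE == 0)
        {
            PQclear(PQexec(conn, "COMMIT"));
            PQclear(PQexec(conn, "BEGIN"));
        }
    }
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}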
The machine is an 800MHz P-III with 512MB RAM and an IDE disk. The postmaster is
started with 30K shared buffers, i.e. around 235MB of buffer space. Kernel caching
parameters are at their defaults.
Besides that, there is the issue of space. MySQL takes 1.4GB of space for 1.2GB of
text data, while PostgreSQL takes 3.2GB. Even with the 40 bytes of per-row overhead
mentioned in the FAQ (12.5M rows x 40 bytes is roughly 500MB), that should come to
around 1.7GB, accounting for a 40% increase in size. Vacuum was run on the database.
Any further help? Especially if batch inserts could be sped up, that would be
great..
Bye
Shridhar
--
Alone, adj.: In bad company. -- Ambrose Bierce, "The Devil's Dictionary"