From: Curt Sampson <cjs(at)cynic(dot)net>
To: Shridhar Daithankar <shridhar_daithankar(at)persistent(dot)co(dot)in>
Cc: Pgsql-hackers(at)postgresql(dot)org
Subject: Re: Improving speed of copy
Date: 2002-10-06 15:06:11
Message-ID: Pine.NEB.4.44.0210070002510.515-100000@angelic.cynic.net
Lists: pgsql-hackers
On Fri, 20 Sep 2002, Shridhar Daithankar wrote:
> On 20 Sep 2002 at 21:22, Shridhar Daithankar wrote:
>
> > Mysql takes 221 sec. v/s 1121 sec. for postgres. For postgresql,
> > that is around 11.5K rows per second. Each tuple has 23 fields with
> > fixed length of around 100 bytes
Yes, postgres is much slower than MySQL for doing bulk loading of data.
There's not much, short of hacking on the code, that can be done about
this.
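[For context, a minimal sketch of the bulk-load path being discussed; the table and file names here are hypothetical, and the syntax shown is the 7.2-era form of COPY:

```sql
-- Load fixed-length delimited rows from a server-side file.
-- Fastest when run inside a single transaction, with indexes
-- created after the load rather than before.
BEGIN;
COPY mytable FROM '/tmp/data.txt' USING DELIMITERS '\t';
COMMIT;
```

Loading into a table that already carries indexes, or issuing one INSERT per row, is considerably slower than a single COPY.]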
> One more issue is time taken for composite index creation. It's 4341
> sec as opposed to 436 sec for mysql. These are three non-unique
> character fields where the combination itself is not unique as well.
Setting sort_mem appropriately makes a big difference here. I generally
bump it up to 2-8 MB for everyone, and when I'm building a big index, I
set it to 32 MB or so just for that session.
But make sure you don't set it so high that you drive your system into
swapping, or it will kill your performance. Remember also that in
7.2.x, postgres will actually use almost three times the value you give
sort_mem (i.e., a sort_mem of 32 MB will actually allocate close to 96 MB
of memory for the sort).
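[A sketch of the per-session approach described above; the index and column names are hypothetical, and sort_mem is given in kilobytes:

```sql
-- Raise sort_mem for this session only, just for the index build.
-- 32768 kB = 32 MB; on 7.2.x expect up to ~3x that in actual use.
SET sort_mem = 32768;
CREATE INDEX idx_mytable_abc ON mytable (col_a, col_b, col_c);
RESET sort_mem;
```

Because SET only affects the current session, other backends keep the smaller default.]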
cjs
--
Curt Sampson <cjs(at)cynic(dot)net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC