From: Marcin Giedz <marcin(dot)giedz(at)eulerhermes(dot)pl>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: Performance problem...
Date: 2005-03-15 08:59:39
Message-ID: 200503150959.39708.marcin.giedz@eulerhermes.pl
Lists: pgsql-admin
On Monday, 14 March 2005 at 19:32, Scott Marlowe wrote:
> On Mon, 2005-03-14 at 12:03, Marcin Giedz wrote:
> > Hello...
> >
> >
> > Our company is going to change SQL engine from MySQL to PSQL. Of course
> > some performance problems occured. Our server is Dual Xeon 3.0GHz + 8GB
> > RAM + RAID1(software - two 146GB SCSI 15k) for sql data + RAID1(software
> > - two 146GB SCSI 15k) for pg_xlog. Postgres.conf parameters are as
> > follows:
> >
> > max_connections = 150
> > shared_buffers = 50000   # min 16, at least max_connections*2, 8KB each
> > work_mem = 2048          # min 64, size in KB
>
> 50,000 shared buffers may or may not be too much. Try it at different
> sizes from 5,000 or so up to 50,000 and find the "knee". It's usually
> closer to 10,000 than 50,000, but ymmv...
Playing with shared_buffers anywhere from 10000 to 50000 doesn't change the
total time for this query at all :( But when I raise work_mem a little, to
10000, total runtime drops by about 10%, and when I change random_page_cost
to 0.2 (I know that almost all papers say it should be higher than 1.0),
total runtime drops almost 3x, to about 900ms - earlier, with
random_page_cost=1.2, it took 2.7s. Is it reasonable to keep
random_page_cost at such a low value?
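
For anyone who wants to reproduce the test, this is roughly what I was
running in a psql session - a sketch only; the table and column names below
are placeholders, not our actual schema or query:

-- per-session overrides, for testing only
SET work_mem = 10000;            -- in KB, so roughly 10MB for sorts/hashes
SET random_page_cost = 0.2;      -- tells the planner random I/O is nearly free
EXPLAIN ANALYZE
SELECT c.id, count(*)
FROM contracts c
JOIN payments p ON p.contract_id = c.id
GROUP BY c.id;
RESET work_mem;
RESET random_page_cost;          -- back to the postgresql.conf values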
>
> On the other hand, for a machine with 8 gigs of ram, 2 meg of work_mem
> is pretty small. Try bumping it up to 8 or 16 megs. You can change
> this one "on the fly" for testing, so just do:
>
> set work_mem=16384;
> and then run the query again and see if that helps. The hash aggregate
> method uses sort/work mem to do its work, and if it doesn't think it
> can hold the result set in that space the planner will pick another
> method, like the merge left join.
>
> In your explain analyze output, look for gross mismatches between
> estimated and actual rows. Most of yours here look pretty good in the
> areas where the data is being collected, but during the merges, the
> numbers are WAY off, but I'm not sure what to do to change that.
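
If the row estimates on the merge/join nodes stay far off from the actual
counts, one thing I plan to try (just a sketch - the table and column names
are placeholders again) is raising the statistics target on the join columns
and re-analyzing, so the planner has more detailed data to estimate from:

ALTER TABLE payments ALTER COLUMN contract_id SET STATISTICS 200;  -- default target is 10
ANALYZE payments;
-- then rerun EXPLAIN ANALYZE and compare estimated vs. actual rows on the join nodes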