From: Harry Broomhall <harry(dot)broomhall(at)uk(dot)easynet(dot)net>
To: shridhar_daithankar(at)myrealbox(dot)com (Shridhar Daithankar)
Cc: harry(dot)broomhall(at)uk(dot)easynet(dot)net, pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance weirdness with/without vacuum analyze
Date: 2003-10-21 14:50:48
Message-ID: 200310211450.PAA15768@haeb.noc.uk.easynet.net
Lists: pgsql-performance
Shridhar Daithankar writes:
First, many thanks for your suggestions and pointers to further info.
I have been trying some of them, with some interesting results!
> Harry Broomhall wrote:
> > #effective_cache_size = 1000 # typically 8KB each
> > #random_page_cost = 4 # units are one sequential page fetch cost
>
> You must tune the first one at least. Try
> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html to tune these
> parameters.
Changing effective_cache_size seemed to have very little effect. I took it
in steps up to 300MB (the machine has 640MB memory), and the differences
in speed were less than 10%.
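For the record, since the parameter is counted in 8KB pages rather than bytes, the sort of line I was stepping up to looked roughly like this (illustrative value only, not a recommendation):

    # postgresql.conf excerpt (illustrative only)
    effective_cache_size = 38400    # ~300MB expressed in 8KB pages (300 * 1024 / 8)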
[SNIP]
>
> What happens if you turn off hash joins?
This makes the non-vacuum version about 40% slower, and brings the vacuumed version
to the same speed (i.e. about 4X faster than it had been!).
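(For anyone wanting to repeat the experiment: hash joins can be toggled per session rather than in postgresql.conf. A sketch of what I mean, using the planner setting enable_hashjoin:)

    -- sketch: disable hash joins for the current session only
    SET enable_hashjoin = off;
    -- ... run / EXPLAIN ANALYZE the query in question here ...
    SET enable_hashjoin = on;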
> Also bump sort memory to something
> good.. around 16MB and see what difference does it make to performance..
This was interesting. Taking it to 10MB made a slight improvement. At 20MB the
vacuumed case improved about 5X in speed, but the non-vacuum version slowed down.
Putting it up to 40MB slowed both down again.
I will need to test with some of the other scripts and functions I have
written, but it looks as if selective use of more sort memory will be
useful.
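If it does prove useful, my understanding is that sort_mem can be raised just for the session that needs it rather than globally; roughly like this (sort_mem is in KB, so 20MB is 20480):

    -- sketch: raise sort memory for one session only (sort_mem is in KB)
    SET sort_mem = 20480;   -- ~20MB for this session
    -- ... run the sort-heavy queries ...
    RESET sort_mem;         -- back to the configured default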
Regards,
Harry.