From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Greg Smith" <gsmith(at)gregsmith(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org, wsmith23_2001(at)yahoo(dot)com
Subject: Re: Checkpoint tuning on 8.2.4
Date: 2008-06-06 17:22:27
Message-ID: dcc563d10806061022i76e86e42s4195b55872571057@mail.gmail.com
Lists: pgsql-performance
On Fri, Jun 6, 2008 at 12:30 AM, Greg Smith <gsmith(at)gregsmith(dot)com> wrote:
> vacuum_cost_delay = 750
> autovacuum = true
> autovacuum_naptime = 3600
> autovacuum_vacuum_threshold = 1000
> autovacuum_analyze_threshold = 500
> autovacuum_vacuum_scale_factor = 0.4
> autovacuum_analyze_scale_factor = 0.2
> autovacuum_vacuum_cost_delay = -1
> autovacuum_vacuum_cost_limit = -1
> max_fsm_pages = 5000000
> max_fsm_relations = 2000
These are terrible settings for a busy database. A cost delay of
anything over 10 or 20 is usually WAY too big, and will make vacuums
take nearly forever. A naptime of 3600 is one hour, right? That's also
far too long to nap between simply checking whether another vacuum
should run.
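To put some rough numbers on why a huge cost delay makes vacuum crawl, here's a back-of-the-envelope sketch. It assumes the default vacuum_cost_limit of 200 and vacuum_cost_page_miss of 10, and the worst case where every page vacuum touches is a cache miss; the figures are illustrative, not measurements:

```python
# Rough vacuum throughput under cost-based delay.
# Assumes defaults: vacuum_cost_limit = 200, vacuum_cost_page_miss = 10,
# 8 kB pages, and worst case (every page is a cache miss).
def vacuum_pages_per_sec(cost_delay_ms, cost_limit=200, page_miss_cost=10):
    """Vacuum sleeps cost_delay_ms after accruing cost_limit points,
    i.e. after every cost_limit / page_miss_cost pages (worst case)."""
    pages_per_cycle = cost_limit / page_miss_cost
    cycles_per_sec = 1000.0 / cost_delay_ms
    return pages_per_cycle * cycles_per_sec

for delay in (750, 20):
    pages = vacuum_pages_per_sec(delay)
    mb = pages * 8192 / 1e6
    print(f"delay={delay}ms: ~{pages:.0f} pages/s (~{mb:.1f} MB/s)")
# delay=750ms: ~27 pages/s (~0.2 MB/s)
# delay=20ms: ~1000 pages/s (~8.2 MB/s)
```

At 750 ms, a vacuum that has to scan a few GB of table could take most of a day; at 20 ms it finishes in minutes.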
I'd recommend:
vacuum_cost_delay = 20
autovacuum = true
autovacuum_naptime = 300 # 5 minutes.
Note that I'm used to 8.2, where such settings can be written in more
easily readable units like 5min. So if 3600 is in some other unit, I
could be wrong here.
> Now, when I was on the phone about this system, I recall hearing that
> they've fallen into that ugly trap where they are forced to reload this
> database altogether regularly to get performance to stay at a reasonable
> level. That's usually a vacuum problem, and yet another reason to upgrade
> to 8.3 so you get the improved autovacuum there. Vacuum tuning isn't really
> my bag, and I'm out of time here tonight; anybody else want to make some
> suggestions on what might be changed here based on what I've shared about
> the system?
It may well be that their best option is to manually vacuum certain
tables more often (i.e. the ones that bloat). You can write a script
that connects, sets vacuum_cost_delay to something higher, like 20 or
30, and then runs the vacuum by hand. Such a vacuum may need to run
in an almost continuous loop if the update rate is high enough.
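A sketch of that kind of script might look like the following. The table names, database name, and interval are all hypothetical placeholders; the SET is per-session, so it only throttles this one vacuum and doesn't disturb other backends:

```python
# Hypothetical hand-vacuum script for known bloat-prone tables.
# Table and database names below are placeholders, not from the thread.
import subprocess

BLOATY_TABLES = ["orders", "sessions"]  # hypothetical bloat-prone tables

def vacuum_sql(table, cost_delay_ms=20):
    """Build the per-session statements: throttle lightly, then vacuum."""
    return f"SET vacuum_cost_delay = {cost_delay_ms}; VACUUM ANALYZE {table};"

def run_once(dbname="mydb"):
    """One pass over the bloat-prone tables via psql."""
    for table in BLOATY_TABLES:
        subprocess.run(["psql", "-d", dbname, "-c", vacuum_sql(table)],
                       check=True)

# For a high update rate, run this near-continuously, e.g. from cron
# every few minutes, or in a loop with a short sleep between passes:
#   while True: run_once(); time.sleep(60)
```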
I agree with what you said earlier: the biggest mistake here is
running a db on a RAID-5 array.