From: Jim Nasby <jim(at)nasby(dot)net>
To: Ed L. <pgsql(at)bluepolka(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: autovac hung/blocked
Date: 2006-11-15 15:31:55
Message-ID: F76810C5-D166-4A51-89C0-D15930BEE5CE@nasby.net
Lists: pgsql-general
On Nov 14, 2006, at 1:02 PM, Ed L. wrote:
> On Tuesday November 14 2006 12:56 pm, Jim C. Nasby wrote:
>> You don't have the vacuum cost delay settings set unreasonably
>> high, do you?
>
> I'm not sure. Here's what we're running:
>
> #vacuum_cost_delay = 0                  # 0-1000 milliseconds
> #vacuum_cost_page_hit = 1               # 0-10000 credits
> #vacuum_cost_page_miss = 10             # 0-10000 credits
> #vacuum_cost_page_dirty = 20            # 0-10000 credits
> #vacuum_cost_limit = 200                # 0-10000 credits
> autovacuum = on                         # enable autovacuum subprocess?
> #autovacuum_naptime = 60                # time between autovacuum runs, in secs
> autovacuum_naptime = 600                # changed by CW 9/11/06 to minimize
>                                         # interference with application
> autovacuum_vacuum_threshold = 1000      # min # of tuple updates before vacuum
> autovacuum_analyze_threshold = 500      # min # of tuple updates before analyze
> autovacuum_vacuum_scale_factor = 0.4    # fraction of rel size before vacuum
> autovacuum_analyze_scale_factor = 0.2   # fraction of rel size before analyze
> autovacuum_vacuum_cost_delay = 500      # default vacuum cost delay for
>                                         # vacuum_cost_delay
> autovacuum_vacuum_cost_limit = 200      # default vacuum cost limit for
>                                         # vacuum_cost_limit
>
> The table on which it seems to be struggling is updated
> many times per second and has 7.2M rows over 15GB total.
>
> I'm trying to figure out how often it would be napping
> on that...? Maybe we're just seeing a very conservative
> vacuum on a large table...?
Well, it's going to take some time to vacuum 15GB, but it appears
there's also some confusion about what these settings do.
autovacuum_naptime simply controls how often autovacuum wakes up and
checks whether there's any work to do; it has no effect on how much
load a vacuum puts on the system once it's running. There's really no
reason at all to set it to 10 minutes.
If you're worried about the impact vacuum has on your application,
the *_cost_* settings are what you want to be using. They work by
incrementing a cost counter for each page vacuum touches, by
vacuum_cost_page_hit, _miss or _dirty depending on what happened to
that page. Once the counter reaches the cost limit, vacuum stops what
it's doing, sleeps for vacuum_cost_delay milliseconds (autovacuum uses
autovacuum_vacuum_cost_delay instead, when it's set), then resets the
counter and carries on.
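
To put rough numbers on that, here's a back-of-envelope using the
settings you quoted, assuming every page vacuum touches is costed as a
miss and ignoring the index passes entirely:

    -- 15GB of 8K pages, cost limit 200, page miss cost 10, 500 ms delay
    SELECT pages,
           pages / (200 / 10)       AS sleeps,        -- ~20 pages per cost cycle
           pages / (200 / 10) * 0.5 AS sleep_seconds  -- 500 ms per sleep
    FROM (SELECT 15::bigint * 1024 * 1024 * 1024 / 8192 AS pages) t;

That comes out to roughly 98,000 sleeps, or something over 13 hours of
pure sleep time before you count any actual I/O, which could easily
make autovacuum look hung on a table that size.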
So in broad terms, if you want to reduce the impact of vacuum/
autovacuum, set vacuum_cost_delay/autovacuum_vacuum_cost_delay to a
modest value; 20 ms seems to be a decent starting point. If you
monitor disk activity, you can then adjust things so that you're at
90% or so of your drives' capacity with vacuum and a normal workload
running together. Depending on how much of your data is normally in
cache, you might also want to increase both vacuum_cost_page_dirty
and the cost limit.
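
As a rough sketch only (numbers to tune against your own I/O, not a
recommendation for your exact workload), that would look something
like this in postgresql.conf:

    autovacuum = on
    autovacuum_naptime = 60               # back to the default; checking for work is cheap
    autovacuum_vacuum_cost_delay = 20     # ms; still throttles vacuum, without
                                          # stretching it out for hours the way 500 does
    autovacuum_vacuum_cost_limit = 200    # raise this (and vacuum_cost_page_dirty)
                                          # if most of the data is normally in cache

Checking for work every minute costs essentially nothing; it's the
cost delay and cost limit that decide how hard a running vacuum hits
your disks.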
--
Jim Nasby jim(at)nasby(dot)net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)