From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Ernesto Quiñones <ernestoq(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Question about VACUUM
Date: 2011-12-05 17:42:56
Message-ID: CAOR=d=0ZYjdiHifBEXDsXy7+964+ikK41rfQjnARCV1SV6bz=Q@mail.gmail.com
Lists: pgsql-performance
On Mon, Dec 5, 2011 at 10:19 AM, Ernesto Quiñones <ernestoq(at)gmail(dot)com> wrote:
> Hi Kevin, my comments are below yours.
>
> 2011/12/3 Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>:
>> Ernesto Quiñones wrote:
>>> Scott Marlowe wrote:
>>>> Ernesto Quiñones wrote:
>>
>>>>> Is it possible to predict (calculate) how long a VACUUM FULL
>>>>> on a given table will take?
>>
>> I don't think you said what version of PostgreSQL you're using.
>> VACUUM FULL prior to version 9.0 is not recommended for most
>> situations, and can take days or weeks to complete where other
>> methods of achieving the same end may take hours. If you have
>> autovacuum properly configured, you will probably never need to run
>> VACUUM FULL.
>
> I'm working with PostgreSQL 8.3 running on Solaris 10. My autovacuum
> parameters are:
>
> autovacuum                        on
> autovacuum_analyze_scale_factor   0.5
> autovacuum_analyze_threshold      50000
> autovacuum_freeze_max_age         200000000
> autovacuum_max_workers            3
> autovacuum_naptime                1h
> autovacuum_vacuum_cost_delay      -1
> autovacuum_vacuum_cost_limit      -1
> autovacuum_vacuum_scale_factor    0.5
> autovacuum_vacuum_threshold       50000
>
> My vacuum parameters are:
>
> vacuum_cost_delay   1s
> vacuum_cost_limit   200
Those are insane settings for vacuum costing, even on a very slow
machine. Basically you're starving vacuum and autovacuum so much that
they can never keep up.
> My hard disks perform well and I have a good amount of memory, but my
> cores are quite weak, only 1 GHz each.
If so, then your settings for vacuum costing are doubly bad.
I'd start by setting the cost_delay to 1ms and raising your cost limit
by a factor of 10 or more.
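For reference, a minimal sketch of what that change could look like in
postgresql.conf (the exact values are illustrative starting points, not
tuned numbers):

    vacuum_cost_delay = 1ms    # was 1s: sleep 1 ms, not 1 s, per cost cycle
    vacuum_cost_limit = 2000   # was 200: allow ~10x more work between sleeps
    # autovacuum_vacuum_cost_delay = -1 and autovacuum_vacuum_cost_limit = -1
    # already fall back to the two settings above, so autovacuum inherits them.

After editing, reload the config (pg_ctl reload, or SELECT
pg_reload_conf();) and watch whether autovacuum starts keeping up.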
> I have some questions here:
>
> 1. With autovacuum_max_workers = 3, does each worker process get its
> own core, or is one core shared by the 3 workers?
Each worker is a single process, so each one can use at most one core.
Right now your vacuum costing is such that it's using 1/100000th or so
of a CPU.
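To put rough numbers on that (a back-of-the-envelope estimate using the
default page costs, vacuum_cost_page_hit = 1, vacuum_cost_page_miss = 10,
vacuum_cost_page_dirty = 20):

    200 cost units per 1 s sleep  ->  at worst ~10 dirtied pages per cycle
    10 pages x 8 kB               ->  roughly 80 kB/s of vacuum throughput

An unthrottled vacuum on reasonable disks can push tens of MB/s, so
these settings slow it down by several orders of magnitude.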
> 2. When I run EXPLAIN ANALYZE on a very big table (30 million rows),
> the plan estimates 32 million rows. I assume that means my statistics
> are off by 2 million rows, but is that an important difference, or is
> it a normal result?
Look for estimates being off by a factor of 10 or more before it
starts to make a big difference. 32M versus 30M is no big deal. 30k
versus 30M is a big deal.
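For reference, the two numbers to compare are the planner's rows
estimate and the actual rows in the EXPLAIN ANALYZE output (the table
name here is hypothetical):

    EXPLAIN ANALYZE SELECT * FROM big_table;
    --  Seq Scan on big_table
    --        (cost=0.00..480000.00 rows=32000000 width=100)
    --        (actual time=0.040..9123.456 rows=30000000 loops=1)

If the estimate drifts much further from the actual count, a manual
ANALYZE big_table; will refresh the statistics.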