From: Qingbo Zhou <zhouqb(at)gmail(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: ANALYZE estimates very wrong reltuples
Date: 2013-02-20 09:13:34
Message-ID: CAA-FGaLYCKZkP90=Kn6ZrY0CW5UX6Trr_6++VUFvP4eEFjNVbg@mail.gmail.com
Lists: pgsql-admin
Hi,
I have a table with ~26M records in it, but pg_class shows ~35M in the
reltuples column. last_autovacuum is NULL, and last_autoanalyze ran just
yesterday.
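For reference, this is roughly how I am comparing the estimate with the real
count ('my_table' stands in for the actual table name):

    SELECT c.relname, c.relpages, c.reltuples::bigint,
           s.last_autovacuum, s.last_autoanalyze
    FROM pg_class c
    JOIN pg_stat_user_tables s ON s.relid = c.oid
    WHERE c.relname = 'my_table';   -- reltuples reports ~35M

    SELECT count(*) FROM my_table;  -- returns ~26M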
I tried running VACUUM ANALYZE on it, but ANALYZE reports:
INFO: "[table name]": scanned 30000 of 950145 pages, containing 805842
live rows and 11109 dead rows; 30000 rows in sample, 42266462 estimated
total rows
That is an even worse estimate. I know it is only sampling, but the result is
far too wrong. I have another table for which pg_class estimates three times
the actual number of rows.
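Extrapolating the sample density by hand lands close to the real count, which
makes the 42M figure even more puzzling. My understanding (possibly wrong) is
that 9.1's ANALYZE blends the new sample density with the old reltuples value
rather than trusting the sample alone, so a stale overestimate can linger:

    SELECT 805842 / 30000.0 * 950145;  -- ~25.5M from the sample density alone,
                                       -- close to the real ~26M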
I'm using 9.1.3, and the vacuum settings are:
# grep vacuum postgresql.conf
vacuum_cost_delay = 25ms # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
#autovacuum = on # Enable autovacuum subprocess? 'on'
log_autovacuum_min_duration = 60000 # -1 disables, 0 logs all actions and
autovacuum_max_workers = 6 # max number of autovacuum subprocesses
autovacuum_naptime = 15s # time between autovacuum runs
autovacuum_vacuum_threshold = 1000 # min number of row updates before
# vacuum
autovacuum_analyze_threshold = 100 # min number of row updates before
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
autovacuum_vacuum_cost_delay = 10ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
autovacuum_vacuum_cost_limit = 1000 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
The host machine has 32GB of memory; effective_cache_size is set to 16GB and
shared_buffers to 8GB. We have other machines with exactly the same data and
configuration that have not run into this situation.
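One workaround I am considering, assuming I am right that a vacuum which scans
every page (rather than skipping all-visible ones) sets reltuples exactly, is
to force a full-table scan with FREEZE; please correct me if that assumption
does not hold on 9.1:

    VACUUM FREEZE VERBOSE my_table;  -- 'my_table' is a placeholder; FREEZE
                                     -- should force scanning every page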
Can you give me some advice?
Thanks a lot,
Qingbo