From: | "Carlo Stonebanks" <stonec(dot)register(at)sympatico(dot)ca> |
---|---|
To: | pgsql-performance(at)postgresql(dot)org |
Subject: | default_statistics_target |
Date: | 2010-03-14 23:27:57 |
Message-ID: | hnjrcs$2trl$1@news.hub.org |
Lists:      pgsql-performance
Hi people,
The whole topic of messing with stats makes my head spin, but I am concerned
about some horridly performing queries that have had bad row estimates, and
others that always choose seq scans even when indexes are available. Reading
up on how to improve planner estimates, I have seen references to raising
default_statistics_target from its default of 10 to 100.
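For reference, here is roughly what I understand that change would look like
(nothing below has been applied here yet; the database name is just a
placeholder):

    -- postgresql.conf (cluster-wide; needs a reload):
    --   default_statistics_target = 100

    -- Or at narrower scopes, e.g. per session or per database
    -- ("mydb" is a placeholder name):
    SET default_statistics_target = 100;
    ALTER DATABASE mydb SET default_statistics_target = 100;

    -- The new target only matters once the stats are rebuilt:
    ANALYZE;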
Our DB is large, with thousands of tables, but the core schema has about 100
tables, with typical row counts in the millions per table. We have been
playing endless games with tuning this server, but in all of the suggestions,
I don't think changing default_statistics_target has ever come up. I realize
there is a performance hit associated with ANALYZE, but are there any other
downsides to increasing this value to 100, and is this a common setting for
large DBs?
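For illustration only (the table and column names below are made up), this is
the kind of check I mean when I say the row estimates are bad, plus the
per-column variant I have seen suggested as a way to limit the ANALYZE cost:

    -- Raise the target only on a problem column instead of cluster-wide
    -- (core_table / status are hypothetical names):
    ALTER TABLE core_table ALTER COLUMN status SET STATISTICS 100;
    ANALYZE core_table;

    -- Compare the planner's estimate ("rows=...") against the actual
    -- row count reported by EXPLAIN ANALYZE:
    EXPLAIN ANALYZE SELECT * FROM core_table WHERE status = 'active';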
Thanks,
Carlo