From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Christopher Browne <cbbrowne(at)acm(dot)org>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Recommendations for set statistics
Date: 2005-05-13 16:22:11
Message-ID: 200505130922.11582.josh@agliodbs.com
Lists: pgsql-performance
Chris,
> It is widely believed that a somewhat larger default than 10 would be
> a "good thing," as it seems to be fairly common for 10 to be too small
> to allow statistics to be stable. But nobody has done any formal
> evaluation as to whether it would make sense to jump from 10 to:
>
> - 15?
> - 20?
> - 50?
> - 100?
> - More than that?
My anecdotal experience is that if more than 10 is required, you generally
need to jump to at least 100, and more often 250. On the other end, I've
generally not found any difference between 400 and 1000 when it comes to
"bad" queries.
I have an unfinished patch in the works that raises the stats_target for
all *indexed* columns to 100 or so. However, I still need to work up a
test case to prove its utility.
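For anyone who wants to experiment by hand, the per-column knob such a
patch would automate already exists: ALTER TABLE ... SET STATISTICS,
followed by ANALYZE. A minimal sketch (table and column names here are
made up for illustration):

```sql
-- Raise the statistics target for one indexed column from the
-- default of 10 to 100, then re-gather stats so the planner sees
-- the larger histogram and most-common-values lists.
-- (hypothetical table/column; the value may range from 0 to 1000)
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;

-- The per-column setting is visible in pg_attribute:
SELECT attname, attstattarget
  FROM pg_attribute
 WHERE attrelid = 'orders'::regclass
   AND attname = 'customer_id';
```

After the ANALYZE, comparing EXPLAIN output for the previously "bad"
queries before and after the change is the quickest way to see whether
the larger target actually stabilizes the estimates.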
--
Josh Berkus
Aglio Database Solutions
San Francisco