From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Casey Duncan <casey(at)pandora(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Expected accuracy of planner statistics
Date: 2006-09-29 03:51:01
Message-ID: 9347.1159501861@sss.pgh.pa.us
Lists: pgsql-general
Casey Duncan <casey(at)pandora(dot)com> writes:
> I was also trying to figure out how big the sample really is. Does a
> stats target of 1000 mean 1000 rows sampled?
No. From memory, the sample size is 300 times the stats target (e.g.,
3000 rows sampled for the default target of 10). This is based on some
math that says that's enough for a high probability of getting good
histogram estimates. Unfortunately that math promises nothing about
n_distinct.
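To make the arithmetic concrete, the rule is essentially this (a
from-memory sketch, not the exact analyze.c code; the function name is
made up for illustration):

    /* rows to sample for a given per-column statistics target */
    int
    sample_rows_for_target(int stats_target)
    {
        /*
         * 300 * target is chosen to give good histogram error bounds;
         * note it does not grow with the size of the table.
         */
        return 300 * stats_target;
    }

So a target of 10 means 3000 sampled rows, and a target of 1000 means
300000 sampled rows, not 1000.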
The literature we've seen says that the only statistically reliable way
to arrive at an accurate n_distinct estimate is to examine most of the
table :-(. That is infeasible for extremely large tables, which is
exactly where the problem is worst. Marginal increases in the sample
size seem unlikely to help much ... as indeed your experiment shows.
We could also diddle the estimator equation to inflate the estimate.
I'm not sure whether such a cure would be worse than the disease, but
certainly the current code was not given to us on stone tablets.
IIRC I picked an equation out of the literature partially on the basis
of it being simple and fairly cheap to compute...
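If memory serves, the equation is the Haas/Stokes "Duj1" estimator.
Roughly (an illustrative sketch, not the exact analyze.c code, and the
real code special-cases f1 == 0 and d == n):

    /*
     * Rough sketch of the Haas/Stokes "Duj1" n_distinct estimator.
     * Names here are illustrative only.
     *
     *   n  - rows actually sampled
     *   N  - estimated total rows in the table
     *   d  - distinct values observed in the sample
     *   f1 - values observed exactly once in the sample
     */
    double
    estimate_ndistinct(double n, double N, double d, double f1)
    {
        if (n <= 0.0 || N <= 0.0)
            return d;

        return (n * d) / ((n - f1) + f1 * n / N);
    }

Roughly speaking, f1 (the count of values seen only once) carries almost
all of the information about values the sample never saw at all, which
is why the estimate tends to come out low when the sample is a tiny
fraction of the table.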
regards, tom lane