From: Greg Stark <gsstark(at)mit(dot)edu>
To: Mary Edie Meredith <maryedie(at)osdl(dot)org>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-performance <pgsql-performance(at)postgresql(dot)org>, osdldbt-general <osdldbt-general(at)lists(dot)sourceforge(dot)net>
Subject: Re: [GENERAL] how to get accurate values in pg_statistic
Date: 2003-09-07 23:18:01
Message-ID: 87fzj86vdi.fsf@stark.dyndns.tv
Lists: pgsql-performance

Mary Edie Meredith <maryedie(at)osdl(dot)org> writes:
> We ran additional tests with default_statistics_target set to 1000 (the
> max I believe). The plans are the same over the different runs, but the
> pg_statistics table has different cost values. The performance results
> of the runs are consistent (we would expect this with the same plans).
> The resulting performance metrics are similar to the best plans we see
> using the default histogram size (good news).

Hm, would it be possible to do a binary search and find the target at which
you start getting consistent plans? Perhaps the default of 10 is simply way
too small and should be raised?
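
A minimal sketch of that binary search, assuming plan stability is monotonic in
the statistics target (once plans stabilize at some target, they stay stable at
every larger one). The `plans_stable` callback is a hypothetical stand-in for
re-running ANALYZE at the candidate target several times and diffing the
resulting EXPLAIN output:

```python
def find_min_stable_target(plans_stable, lo=10, hi=1000):
    """Binary search for the smallest default_statistics_target in
    [lo, hi] at which plans_stable(target) first returns True.
    Returns None if even the maximum target gives unstable plans."""
    if not plans_stable(hi):
        return None
    while lo < hi:
        mid = (lo + hi) // 2
        if plans_stable(mid):
            hi = mid      # stable here; try smaller targets
        else:
            lo = mid + 1  # unstable here; must go larger
    return lo
```

In practice `plans_stable` would SET default_statistics_target, ANALYZE the
tables a few times, and compare the plans chosen on each run; the search then
costs only O(log n) such probes instead of trying every target.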
Obviously this would depend on the data model. But if your aim is for the
benchmark data to be representative of typical data models, then needing such
a large target here scares me into thinking users may be seeing similarly
unpredictable, variable performance.
--
greg