>>> Gregory Stark <stark(at)enterprisedb(dot)com> wrote:
> Incidentally this timing is with the 75kB toasted arrays in shared
> buffers because the table has just been analyzed. If it was on a busy
> system then just planning the query could involve 75kB of I/O, which
> is what I believe was happening to me way back when I last observed
> super-long plan times.
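
(As a quick sanity check on the sizes involved, something like the
following should show roughly how large a table's statistics rows
actually are; the arrays get pushed out to TOAST once a row exceeds a
couple of kB. Reading pg_statistic directly requires superuser, and
"mytable" is just a stand-in here:)

    SELECT staattnum, pg_column_size(s.*) AS stats_row_bytes
      FROM pg_statistic s
     WHERE starelid = 'mytable'::regclass
     ORDER BY staattnum;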

I'm pretty sure I saw that effect in some of my tests with larger
targets. With a large database and a large target, some of the tables'
statistics apparently were no longer cached the first time I planned
the query, so I got an extremely long plan time on the first attempt;
on repeated plans it then settled into a pretty narrow range. I
discarded that initial plan time as immaterial for our purposes,
because a query that's run 300,000 times per day is probably going to
keep its statistics in cache most of the time.
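
(For anyone wanting to reproduce that pattern, it's easy to see with
\timing in psql; the first EXPLAIN after the statistics have fallen
out of cache may have to read the toasted pg_statistic arrays back in,
while later runs show the steady-state plan time. The query here is
just a placeholder:)

    \timing on
    EXPLAIN SELECT * FROM test_table WHERE col = 42;  -- first run: may include stats I/O
    EXPLAIN SELECT * FROM test_table WHERE col = 42;  -- repeat runs: steady-state plan time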

I was looking at modifying the perl script from Robert Haas to run my
query at a wide range of target values, collecting the analyze and
plan times at each. Now that you have an easy-to-create synthetic
example, is that still worth doing, considering that it would be on
8.2? (If we wait a week or two, I could probably do it on 8.3.) We do
have gprof on these systems, although I'd need advice on how to use
it.
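
(For reference, what I have in mind amounts to roughly the following
in psql, recording the times that \timing reports at each step; the
table and query are stand-ins, and on 8.2 the target tops out at
1000:)

    \timing on
    SET default_statistics_target = 10;
    ANALYZE test_table;                               -- analyze time at target 10
    EXPLAIN SELECT * FROM test_table WHERE col = 42;  -- plan time at target 10
    SET default_statistics_target = 100;
    ANALYZE test_table;
    EXPLAIN SELECT * FROM test_table WHERE col = 42;
    SET default_statistics_target = 1000;
    ANALYZE test_table;
    EXPLAIN SELECT * FROM test_table WHERE col = 42;
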
-Kevin