>>> Gregory Stark <stark(at)enterprisedb(dot)com> wrote:
> That sounds like it would be an interesting query to analyze in more
> detail. Is there any chance you could run the complete graph and get
> a chart of analyze times for all statistics values from 1..1000? And
> log the explain plans to a file so we can look at what statistics
> targets the plan changed?
>
> Or if the data is public I would be interested in looking at doing it
> if you want to send it to me.
There are some very big tables in that query which contain
confidential data. It would be hard to do a lot of runs at high
default_statistics_target values because the analyze time for the
whole database gets so long. If I pick out the tables used by the
query, I might be able to put together a script which loops through
ANALYZE of those tables at different targets and captures the run times.
There are two problems -- finding the time to set this up, and finding
server time windows where other things wouldn't be distorting the
results. If you could help with setting up the test script, that
would go a long way toward solving the first problem. I think I could
reserve a smaller multi-CPU machine with identical data but slower
CPUs to run the test.
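
Roughly what I have in mind for the script is something like this --
just a sketch, with placeholder table names and an assumed
explain_query.sql file holding EXPLAIN of the attached query (and
assuming no per-column targets set with ALTER TABLE ... SET STATISTICS,
which would override the default):

  -- analyze_test.sql: run as
  --   psql -d dbname -v target=N -f analyze_test.sql >> results.log
  -- with the shell looping N over the statistics targets to test.
  \timing
  \echo === statistics target :target ===
  SET default_statistics_target = :target;
  ANALYZE some_big_table_1;   -- one ANALYZE per table used by the query
  ANALYZE some_big_table_2;
  \i explain_query.sql        -- file containing EXPLAIN <the attached query>

The \timing output gives the ANALYZE times and the included file dumps
the plan, so one log file per run captures both.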
I'll attach the query and plan. You'll note that the query looks a
little odd, especially all the (1=1) tests. This is because the
application allows users to plug in a variety of selection criteria,
and any that aren't used are stubbed out that way. I picked one that
was not too atypical of the 300,000 runs per day. I intentionally
didn't "clean up" the white space, but left it just as it was emitted
by our framework, in case any of that affected parse/plan time.
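
For example, a criterion the user left blank shows up in the WHERE
clause roughly like this (made-up column names, not from the actual
query):

  WHERE (t.some_column = 'X')   -- criterion the user filled in
    AND (1=1)                   -- unused criterion, stubbed out
    AND (1=1)                   -- another unused criterion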
If you need schema info not obvious from the plan, let me know.
-Kevin