From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Jeroen Vermeulen <jtv(at)xs4all(dot)nl>, Greg Stark <gsstark(at)mit(dot)edu>, Bart Samwel <bart(at)samwel(dot)tk>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Avoiding bad prepared-statement plans.
Date: 2010-02-26 04:25:46
Message-ID: 7882.1267158346@sss.pgh.pa.us
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Thu, Feb 25, 2010 at 10:40 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I still like the idea of automatically replanning with the known
>> parameter values, and noting whether the result plan was estimated to be
>> noticeably cheaper than the generic plan, and giving up on generating
>> custom plans if we didn't observe any such win over N tries.
> Isn't part of the problem here precisely that the cost estimates for
> the generic plan might not be too accurate?
No, the estimates for the generic plan are typically fine *in
themselves*; they only look bad when you compare them to what you can do
with knowledge of specific parameter values. An example is that the
default selectivity estimate for a range query (WHERE x > something AND
x < somethingelse) is 0.005. In a large number of real cases, the
actual selectivity is way smaller, and you can determine that if you
know the actual comparison constants. But it's tough to argue for
decreasing the default guess --- it's already small enough that you
could get screwed badly in the other direction if you queried a wide
range.
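To make that concrete, here is a minimal sketch (table, index, and the
specific constants are all made up for illustration):

    -- Hypothetical table with an index on x; assume it has been loaded
    -- and ANALYZEd so the planner has a histogram for x.
    CREATE TABLE t (x integer, payload text);
    CREATE INDEX t_x_idx ON t (x);

    -- Planned with unknown parameters: the planner has to fall back on
    -- the default range selectivity, so it guesses a fixed fraction of
    -- the table no matter how narrow the range really is.
    PREPARE q(integer, integer) AS
        SELECT * FROM t WHERE x > $1 AND x < $2;

    -- Planned with literal constants: the planner can consult the
    -- histogram for x, which for a narrow range typically produces a
    -- far smaller (and more accurate) row estimate.
    EXPLAIN SELECT * FROM t WHERE x > 100 AND x < 110;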
There may be some cases where the generic plan is wrongly estimated to
be cheaper than a custom plan that's actually better, but I haven't seen
many. If that were happening a lot then people would be reporting that
the advice to force a replan via EXECUTE or whatever doesn't help.
I don't think that there is any body of evidence at all that would
justify undertaking extremely expensive development of an extremely
painful-to-use feature to deal with that type of case.
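(For the archives, the workaround I mean looks roughly like this; just a
sketch, reusing the hypothetical table t from above. plpgsql's EXECUTE
plans the statement afresh on each call, with the actual parameter
values visible to the planner:)

    CREATE FUNCTION lookup_range(lo integer, hi integer)
    RETURNS SETOF t AS $$
    BEGIN
        -- EXECUTE ... USING re-plans on every call, so the planner
        -- sees the current values of lo and hi instead of re-using one
        -- cached generic plan built for unknown parameters.
        RETURN QUERY EXECUTE
            'SELECT * FROM t WHERE x > $1 AND x < $2'
            USING lo, hi;
    END;
    $$ LANGUAGE plpgsql;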
> Also, there's no guarantee that the distribution of values
> tried will be random - there's the case where non-MCVs are tried for
> the first N times and then a non-MCV is tried on try N+1.
Sure, there are always going to be cases where you lose. Pushing the
responsibility onto the user doesn't really fix that though. It's not
apparent to me that users are in that much better position than we are
to determine when a custom plan is helpful.
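(Side note for anyone following along: a column's MCV list and the
associated frequencies are visible in pg_stats, e.g. for the
hypothetical table above:

    SELECT most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 't' AND attname = 'x';

Values absent from that list are what we're calling non-MCVs here.)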
BTW, if it wasn't clear, I would be in favor of allowing the cutoff N to
be adjustable, as well as the cost ratio that's considered to constitute
a win. So there would be some wiggle room to deal with that type of
situation.
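Something along these lines, say (GUC names entirely invented here for
illustration; nothing of the sort exists today):

    -- Hypothetical knob: stop generating custom plans after this many
    -- executions that showed no estimated win over the generic plan.
    SET custom_plan_tries = 5;

    -- Hypothetical knob: a custom plan counts as a win only if its
    -- estimated cost is below this fraction of the generic plan's.
    SET custom_plan_cost_ratio = 0.9;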
regards, tom lane