From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Jeroen Vermeulen <jtv(at)xs4all(dot)nl>, Greg Stark <gsstark(at)mit(dot)edu>, Bart Samwel <bart(at)samwel(dot)tk>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Avoiding bad prepared-statement plans.
Date: 2010-02-26 04:01:14
Message-ID: 603c8f071002252001n6b2467bal2e6bae26a9a2b79a@mail.gmail.com
Lists: pgsql-hackers

On Thu, Feb 25, 2010 at 10:40 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> I actually think there isn't any clean line. Obscene is in the eye of
>> the beholder. Frankly, I think this discussion is getting off into
>> the weeds. It would be nice, perhaps, to have a feature that will
>> detect when the generic plan is the suxxor and attempt to find a
>> better one, but that's really, really hard for a whole bunch of
>> reasons. Bruce's suggestion that we should provide some user control
>> over whether we plan at bind time or execute time seems likely to be
>> (1) considerably simpler to implement, (2) considerably easier to get
>> consensus on, and (3) capable of giving 90% of the benefit for only a
>> slightly higher inconvenience factor.
>
> It's not going to be easier to implement. Yeah, it would be easy to
> provide a global switch via a GUC setting, but that's not going to be
> helpful, because this is the sort of thing that really needs to be
> managed per-query. Almost any nontrivial application is going to have
> some queries that really need the custom plan and many that don't.
> If people just turn the GUC on we might as well throw away the plan
> caching mechanism altogether.
I agree. A GUC is a really bad idea.
> But putting support for a per-query level
> of control into the protocol (and then every client library) as well as
> every PL is going to be painful to implement, and even more painful to
> use.
I suppose I should have learned by now not to argue with you over
technical points, but I don't see why this should be painful. I mean,
it'll be a lot of work and it'll in the end touch a lot of different
parts of the code, but work != pain, and I don't see any reason why
the problem can't be attacked incrementally. I'm also deeply
unconvinced that any other solution will be as satisfactory.
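(For what it's worth, clients can already get a crude per-query version of
this today, as I understand it, just by not using a named statement: the
unnamed statement is planned at Bind time with the real parameter values,
while a named prepared statement gets the generic plan. A minimal libpq
illustration -- the connection string and table are invented:

/* Invented example: the same query, planned two different ways. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=test");
    const char *params[1] = { "42" };

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Unnamed statement: planned at Bind time, knowing $1 = '42'. */
    PQclear(PQexecParams(conn, "SELECT * FROM foo WHERE x = $1",
                         1, NULL, params, NULL, NULL, 0));

    /* Named statement: planned without the parameter value. */
    PQclear(PQprepare(conn, "q1", "SELECT * FROM foo WHERE x = $1", 1, NULL));
    PQclear(PQexecPrepared(conn, "q1", 1, params, NULL, NULL, 0));

    PQfinish(conn);
    return 0;
}

But that knob is all-or-nothing per call and doesn't help the PLs, which is
exactly why something explicit would be nicer.)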
> I still like the idea of automatically replanning with the known
> parameter values, and noting whether the result plan was estimated to be
> noticeably cheaper than the generic plan, and giving up on generating
> custom plans if we didn't observe any such win over N tries.
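For concreteness, I read that as bookkeeping roughly like this -- just a
sketch, none of these names exist in the plancache code, and both N and the
"noticeably cheaper" threshold are numbers I made up:

/* Sketch of the proposed give-up-on-custom-plans heuristic (invented names). */
#include <stdbool.h>
#include <stdio.h>

#define CUSTOM_PLAN_TRIES   5       /* the "N tries" in the proposal */
#define NOTICEABLY_CHEAPER  0.9     /* custom must beat generic by >10% */

typedef struct CachedPlanState
{
    double  generic_cost;       /* estimated cost of the generic plan */
    int     custom_attempts;    /* custom plans built so far */
    bool    custom_ever_won;    /* was any custom plan noticeably cheaper? */
    bool    use_generic;        /* final decision: stop replanning */
} CachedPlanState;

/* At Bind/Execute time: do we still bother building a custom plan? */
static bool
want_custom_plan(CachedPlanState *st)
{
    return !st->use_generic;
}

/* Record the estimated cost of a freshly built custom plan. */
static void
note_custom_plan_cost(CachedPlanState *st, double custom_cost)
{
    st->custom_attempts++;
    if (custom_cost < st->generic_cost * NOTICEABLY_CHEAPER)
        st->custom_ever_won = true;

    /* After N tries with no estimated win, fall back to the generic plan. */
    if (st->custom_attempts >= CUSTOM_PLAN_TRIES && !st->custom_ever_won)
        st->use_generic = true;
}

int
main(void)
{
    CachedPlanState st = { .generic_cost = 100.0 };
    double costs[] = { 98.0, 101.0, 97.5, 99.0, 100.5, 12.0 };

    for (int i = 0; i < 6; i++)
    {
        if (!want_custom_plan(&st))
        {
            printf("execution %d: reuse generic plan\n", i + 1);
            continue;
        }
        printf("execution %d: custom plan, estimated cost %.1f\n",
               i + 1, costs[i]);
        note_custom_plan_cost(&st, costs[i]);
    }
    return 0;
}

Note that in this toy run the sixth execution, the one where a custom plan
would finally have been a big win, never gets one -- which is related to my
question below.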
Isn't part of the problem here precisely that the cost estimates for
the generic plan might not be too accurate? The only instances of
this problem I've run across are the ones where MCVs need a different
treatment, and the problem isn't necessarily that the new estimate is
cheaper so much as that the old estimate isn't going to turn out as
predicted. Also, there's no guarantee that the distribution of values
tried will be random - there's the case where non-MCVs are tried for
the first N times and then an MCV is tried on try N+1.
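To pick an invented example: a table where 95% of the rows have
status = 'done' and the rest is spread thinly over rare values. If the first
N binds all happen to hit rare statuses, every custom plan looks just like
the generic index-scan plan, so we stop replanning; then the first time
'done' shows up, the plan that really wanted to be a seqscan (or a different
join order) never gets considered.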
...Robert