From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Shouldn't we have a way to avoid "risky" plans?
Date: 2011-03-23 17:35:55
Message-ID: AANLkTi=GAz7gFBCoXQeRDN3PUA0fXxRyP_=DxS4Y1tJU@mail.gmail.com
Lists: pgsql-performance
On Wed, Mar 23, 2011 at 2:12 PM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
> Folks,
>
>...
> It really seems like we should be able to detect an obvious high-risk
> situation like this one. Or maybe we're just being too optimistic about
> discarding subplans?
Why not let the GEQO learn from past mistakes?
If a post-mortem analysis of executed queries could somehow be done
and fed back into planning, these kinds of mistakes would be a
one-time occurrence.
Ideas:
* estimate cost from statistics only if there's no past experience to
draw on
* if rowcount estimates miss by much, populate a correction cache
with extra (volatile, i.e. in shared memory) statistics (a sketch
of this follows the list)
* or, if rowcount estimates miss by much, schedule an autoanalyze
* consider plan bailout: execute a tempting plan, but if it takes too
long or its effective cost rises well above the expected cost, bail
out to a safer plan (second sketch below)
* account for worst-case performance when evaluating plans (third
sketch below)
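
To make the correction-cache idea concrete, here's a minimal
standalone sketch (plain C, not PostgreSQL code; every name in it is
hypothetical, and a real version would live in shared memory with
locking). The planner side scales its raw estimate by a smoothed
actual/estimated ratio that the executor side records after each run:

    /* Hypothetical rowcount correction cache, NOT PostgreSQL code.
     * Keyed by a hash of a plan node's "signature" (e.g. relation +
     * quals); stores the observed ratio of actual to estimated rows.
     */
    #include <stdio.h>

    #define CACHE_SLOTS 1024

    typedef struct {
        unsigned long key;      /* hash of the plan-node signature */
        double        factor;   /* smoothed actual/estimated ratio */
        int           valid;
    } CorrEntry;

    static CorrEntry cache[CACHE_SLOTS]; /* imagine shared memory */

    /* Planner side: scale the estimate if we have past experience. */
    static double corrected_rows(unsigned long key, double est_rows)
    {
        CorrEntry *e = &cache[key % CACHE_SLOTS];
        if (e->valid && e->key == key)
            return est_rows * e->factor;  /* learn from the past */
        return est_rows;                  /* no experience: trust it */
    }

    /* Executor side: after the node runs, record how far off we were. */
    static void record_rows(unsigned long key, double est_rows,
                            double act_rows)
    {
        CorrEntry *e = &cache[key % CACHE_SLOTS];
        double ratio = act_rows / (est_rows > 0 ? est_rows : 1);
        if (e->valid && e->key == key)
            e->factor = 0.8 * e->factor + 0.2 * ratio; /* smoothing */
        else {
            e->key = key;
            e->factor = ratio;
            e->valid = 1;
        }
    }

    int main(void)
    {
        unsigned long key = 42;   /* stand-in node signature */
        printf("first estimate: %.0f\n", corrected_rows(key, 100));
        record_rows(key, 100, 50000);  /* estimate missed by 500x */
        printf("next estimate:  %.0f\n", corrected_rows(key, 100));
        return 0;
    }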
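The plan-bailout idea could look roughly like the toy below, again
with entirely made-up types: run the tempting plan under a cost budget
derived from its estimate, and switch to the safer plan if the actual
work exceeds it. A real implementation would have to discard or
restart partial results, which is the hard part this sketch glosses
over.

    /* Toy sketch of "plan bailout"; nothing maps to real executor
     * code. Plans are just structs that accrue cost per work batch.
     */
    #include <stdio.h>

    typedef struct {
        const char *name;
        double      expected_cost;
        int         batches;          /* work units the plan performs */
        double      cost_per_batch;   /* actual, unknown to planner */
    } Plan;

    /* Returns 1 on success, 0 if the budget was exceeded (bailout). */
    static int run_with_bailout(const Plan *p, double slack)
    {
        double budget = p->expected_cost * slack;
        double spent = 0;

        for (int i = 0; i < p->batches; i++) {
            spent += p->cost_per_batch;
            if (spent > budget) {
                printf("%s: bailed out at cost %.0f (budget %.0f)\n",
                       p->name, spent, budget);
                return 0;
            }
        }
        printf("%s: completed at cost %.0f\n", p->name, spent);
        return 1;
    }

    int main(void)
    {
        /* Nested loop looked cheap, but its rowcount estimate
         * was badly wrong. */
        Plan tempting = { "nestloop", 100.0, 1000, 10.0 };
        /* Hash join costed higher up front but is robust. */
        Plan safe     = { "hashjoin", 500.0, 10, 40.0 };

        if (!run_with_bailout(&tempting, 2.0)) /* allow 2x estimate */
            run_with_bailout(&safe, 2.0);
        return 0;
    }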
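And worst-case accounting might amount to scoring candidates on a
blend of expected and worst-case cost rather than expected cost alone;
a hypothetical sketch, with the risk_aversion knob and all numbers
invented for illustration:

    /* Sketch of risk-aware plan scoring: fragile plans (cheap on
     * average, catastrophic when the stats are wrong) should lose
     * to robust ones. Purely illustrative.
     */
    #include <stdio.h>

    typedef struct {
        const char *name;
        double expected_cost;  /* planner's usual estimate */
        double worst_cost;     /* cost if rowcount estimates miss */
    } Candidate;

    static double risk_score(const Candidate *c, double risk_aversion)
    {
        /* 0.0 = classic behaviour, 1.0 = plan purely for the
         * worst case */
        return (1.0 - risk_aversion) * c->expected_cost
             + risk_aversion * c->worst_cost;
    }

    int main(void)
    {
        Candidate plans[] = {
            { "nestloop", 100.0, 1e6 },  /* tempting but fragile */
            { "hashjoin", 500.0, 2e3 },  /* dearer but bounded */
        };
        double aversion = 0.3;
        const Candidate *best = &plans[0];

        for (int i = 1; i < 2; i++)
            if (risk_score(&plans[i], aversion)
                < risk_score(best, aversion))
                best = &plans[i];

        printf("chosen plan: %s (score %.0f)\n",
               best->name, risk_score(best, aversion));
        return 0;
    }

With these numbers the hash join wins despite the higher expected
cost, because the nested loop's worst case dominates its score.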