From: Bart Samwel <bart(at)samwel(dot)tk>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Jeroen Vermeulen <jtv(at)xs4all(dot)nl>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Avoiding bad prepared-statement plans.
Date: 2010-02-11 12:09:33
Message-ID: ded01eb21002110409m5b729dffn168061dae0cad213@mail.gmail.com
Lists: pgsql-hackers
Hi Robert,
On Tue, Feb 9, 2010 at 17:43, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Tue, Feb 9, 2010 at 7:08 AM, Jeroen Vermeulen <jtv(at)xs4all(dot)nl> wrote:
> > = Projected-cost threshold =
> >
> > If a prepared statement takes parameters, and the generic plan has a high
> > projected cost, re-plan each EXECUTE individually with all its parameter
> > values bound. It may or may not help, but unless the planner is vastly
> > over-pessimistic, re-planning isn't going to dominate execution time for
> > these cases anyway.
>
> How high is high?
>
Perhaps this could be based on a (configurable?) ratio of observed planning
time to projected execution time. I mean, if planning the statement the first
time took 30 ms and the projected execution time is 1 ms, then by all means
NEVER re-plan. But if planning the first time took 1 ms and resulted in a
projected execution time of 50 ms, then it's relatively cheap to re-plan
every time (the cost increase per execution is 1/50 = 2%), and the potential
gains are much greater (taking a chunk out of 50 ms adds up quickly).
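To make the idea concrete, here is a minimal sketch of that heuristic, not PostgreSQL code: the function name and the threshold value are hypothetical, chosen only to illustrate the ratio test described above.

```python
# Hypothetical sketch of the re-plan heuristic discussed above.
# Re-plan each EXECUTE only when the observed planning time is a small
# fraction of the projected execution time. The threshold (10%) is an
# illustrative, presumably configurable, value.

REPLAN_COST_RATIO_THRESHOLD = 0.1  # hypothetical cutoff: re-plan if overhead <= 10%

def should_replan(planning_time_ms: float, projected_exec_time_ms: float) -> bool:
    """Return True when re-planning every EXECUTE is relatively cheap."""
    if projected_exec_time_ms <= 0:
        return False
    return planning_time_ms / projected_exec_time_ms <= REPLAN_COST_RATIO_THRESHOLD

# The two examples from the discussion:
print(should_replan(30.0, 1.0))   # planning dominates: never re-plan -> False
print(should_replan(1.0, 50.0))   # 2% overhead per execution -> True
```

With this shape, the 30 ms / 1 ms case never re-plans, while the 1 ms / 50 ms case re-plans on every EXECUTE, matching the trade-off sketched above.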
Cheers,
Bart