From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Jeroen Vermeulen <jtv(at)xs4all(dot)nl>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Bart Samwel <bart(at)samwel(dot)tk>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Avoiding bad prepared-statement plans.
Date: 2010-02-21 12:37:50
Message-ID: 603c8f071002210437w43a58131r85bbe7eff90bc266@mail.gmail.com
Lists: pgsql-hackers
On Wed, Feb 17, 2010 at 5:52 PM, Jeroen Vermeulen <jtv(at)xs4all(dot)nl> wrote:
> I may have cut this out of my original email for brevity... my impression is
> that the planner's estimate is likely to err on the side of scalability, not
> best-case response time; and that this is more likely to happen than an
> optimistic plan going bad at runtime.
Interestingly, most of the mistakes that I have seen are in the
opposite direction.
> Yeb points out a devil in the details though: the cost estimate is unitless.
> We'd have to have some orders-of-magnitude notion of how the estimates fit
> into the picture of real performance.
I'm not sure to what extent you can assume that the cost is
proportional to the execution time. I seem to remember someone
(Peter?) arguing that they're not related by any fixed ratio, partly
because things like page costs vs. cpu costs didn't match physical
reality, and that in fact some attempts to gather better empirically
better values for things like random_page_cost and seq_page_cost
actually ended up making the plans worse rather than better. It would
be nice to see some research in this area...
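A minimal sketch of the kind of comparison being discussed, assuming a
hypothetical table t with an index on id: EXPLAIN ANALYZE reports both the
planner's unitless cost estimate and the measured runtime, and the page-cost
GUCs can be changed per session to see how the chosen plan shifts.

    -- hypothetical table; estimated cost vs. actual time for the same query
    EXPLAIN ANALYZE SELECT * FROM t WHERE id BETWEEN 1000 AND 2000;

    -- repeat under different page-cost assumptions (session-local settings)
    SET random_page_cost = 2.0;  -- default is 4.0
    SET seq_page_cost = 1.0;     -- the unit the other cost parameters are scaled against
    EXPLAIN ANALYZE SELECT * FROM t WHERE id BETWEEN 1000 AND 2000;

    RESET random_page_cost;
    RESET seq_page_cost;

Whether the ratio of estimated cost to actual runtime stays anywhere near
constant across plan shapes is exactly the open question above.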
...Robert