From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Ron Mayer <rm_pg(at)cheapcomplexdevices(dot)com>, decibel <decibel(at)decibel(dot)org>, Greg Smith <gsmith(at)gregsmith(dot)com>, jd(at)commandprompt(dot)com, Grzegorz Jaskiewicz <gj(at)pointblue(dot)com(dot)pl>, Bernd Helmle <mailings(at)oopsware(dot)de>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: The science of optimization in practical terms?
Date: 2009-02-18 20:32:53
Message-ID: 21655.1234989173@sss.pgh.pa.us
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> ... At any rate, we'd need to save quite
> a bit to pay for carting around best and worst case costs for every
> plan we consider.
Another problem with this is that it doesn't really do anything to solve
the problem we were just discussing, namely having an intelligent way of
combining inaccurate estimates for WHERE clauses. If you just take a
range of plausible values for each clause and multiply them, it doesn't
take very many clauses to get to a range of [0,1] --- or at least a
range of probabilities wide enough to be unhelpful.
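To make that concrete, here is a toy C sketch (not planner code; the
per-clause interval [0.05, 0.5] is invented for illustration) showing
how quickly multiplied selectivity intervals degrade:

#include <stdio.h>

int
main(void)
{
	/* Suppose each clause's true selectivity lies somewhere in
	 * [0.05, 0.5]; these bounds are made up for the example. */
	double		lo = 1.0;
	double		hi = 1.0;
	int			n;

	for (n = 1; n <= 6; n++)
	{
		lo *= 0.05;
		hi *= 0.5;
		printf("%d clauses: combined selectivity in [%.8f, %.6f]\n",
			   n, lo, hi);
	}

	/* By six clauses the interval spans six orders of magnitude ---
	 * effectively [0,1] for planning purposes. */
	return 0;
}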
An idea that I think has been mentioned before is to try to identify
cases where we can *prove* there is at most one row emitted by a
sub-path (e.g., because of a unique index, DISTINCT subplan, etc). Then
we could penalize nestloops whose outer relations weren't provably a
single row. This is basically restricting the notion of estimation
confidence to a special case that's particularly important for SQL.
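In pseudo-C, the shape of that heuristic might look something like the
following --- none of these names exist in the planner, and the
single-row "proof" and the penalty factor are placeholders:

#include <stdbool.h>
#include <stdio.h>

/* Invented stand-in for a planner Path; not the real struct. */
typedef struct
{
	bool		unique_index_lookup;	/* equality on a unique key */
	bool		distinct_subplan;		/* DISTINCT, ungrouped agg, ... */
} ToyPath;

static bool
provably_single_row(const ToyPath *p)
{
	return p->unique_index_lookup || p->distinct_subplan;
}

/*
 * Trust the nestloop's cost estimate only when the outer side is
 * provably one row; otherwise inflate it by an arbitrary risk factor.
 */
static double
nestloop_cost_with_penalty(const ToyPath *outer, double base_cost)
{
	return provably_single_row(outer) ? base_cost : base_cost * 10.0;
}

int
main(void)
{
	ToyPath		safe = {true, false};
	ToyPath		risky = {false, false};

	printf("provably single row: cost %.1f\n",
		   nestloop_cost_with_penalty(&safe, 100.0));
	printf("not provable:        cost %.1f\n",
		   nestloop_cost_with_penalty(&risky, 100.0));
	return 0;
}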
regards, tom lane