From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Sam Mason <sam(at)samason(dot)me(dot)uk>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: The science of optimization in practical terms?
Date: 2009-02-18 15:57:52
Message-ID: 603c8f070902180757m15b0bd23ib7256cc8ef44447e@mail.gmail.com
Lists: pgsql-hackers
> If the planning were done with some sort of interval then you'd be
> able to encode information about how well your stats characterized the
> underlying data. Traditionally awkward things, like the amount of
> cache, would serve to drop the lower bound but not alter the upper.
> The planner would then automatically propagate performance information
> through the calculations, i.e. a nested loop with a tight estimate on
> a small number of rows joined to a table with a wider estimate of a
> small number of rows would keep the low lower bound, but the upper
> bound would tend to make the planner stay away.
Yeah, I thought about this too, but it seems like overkill for the
problem at hand, and as you say it's not clear you'd get any benefit
out of the upper bound anyway. I was thinking of something simpler:
instead of directly multiplying 0.005 into the selectivity every time
you find something incomprehensible, keep a count of the number of
incomprehensible things you saw and at the end multiply by 0.005/N.
That way more unknown quals look more restrictive than fewer, but
things only get linearly wacky instead of exponentially wacky.
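
Something like this toy sketch, say (combined_selectivity() is a
made-up stand-in for the real clause-selectivity code, and a negative
entry just marks a qual we can't estimate):

    #include <stdio.h>

    #define DEFAULT_UNKNOWN_SEL 0.005   /* the default discussed above */

    /*
     * Instead of multiplying 0.005 in for every incomprehensible qual,
     * count them and apply a single factor of 0.005/N at the end.
     */
    static double
    combined_selectivity(const double *sel, int nquals)
    {
        double  s = 1.0;
        int     n_unknown = 0;
        int     i;

        for (i = 0; i < nquals; i++)
        {
            if (sel[i] < 0.0)
                n_unknown++;    /* incomprehensible: count, don't multiply */
            else
                s *= sel[i];
        }

        if (n_unknown > 0)
            s *= DEFAULT_UNKNOWN_SEL / n_unknown;

        return s;
    }

    int
    main(void)
    {
        /* one known qual, three unknown ones */
        double quals[] = { 0.5, -1.0, -1.0, -1.0 };

        /* yields 0.5 * 0.005/3 rather than 0.5 * 0.005^3 */
        printf("combined selectivity: %g\n",
               combined_selectivity(quals, 4));
        return 0;
    }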
...Robert