From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Merlin Moncure <mmoncure(at)gmail(dot)com>
Cc: pgsql-bugs <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: Introducing floating point cast into filter drastically changes row estimate
Date: 2012-10-24 20:33:00
Message-ID: 9033.1351110780@sss.pgh.pa.us
Lists: pgsql-bugs
Merlin Moncure <mmoncure(at)gmail(dot)com> writes:
> Yeah -- I have a case where a large number of joins are happening that
> have a lot of filtering based on expressions and things like that.
Might be worth your while to install some indexes on those expressions,
if only to trigger collection of stats about them.
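
Concretely, a minimal sketch of that suggestion (the table, column, and
index names here are hypothetical): ANALYZE gathers statistics on an
indexed expression's values, which lets the planner estimate a matching
filter from data rather than from a default selectivity.

    -- hypothetical schema; the point is the expression index, not the table
    CREATE TABLE orders (id int, qty int, price float8);
    -- note the extra parentheses required around an index expression
    CREATE INDEX orders_total_idx ON orders ((qty * price));
    ANALYZE orders;  -- pg_stats now carries an entry for the expression
    -- this filter can now be estimated from the expression's statistics
    EXPLAIN SELECT * FROM orders WHERE qty * price > 1000.0;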
> I've been thinking about this all morning and I think there's a
> fundamental problem here: the planner is using low-confidence
> estimates to pick plans that should really only be used when the
> estimate is relatively precise. In particular, I think row estimates
> derived from default selectivity should be capped, say to the lesser
> of 1000 or the greatest known value if otherwise constrained.
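
(The default-selectivity fallback referred to here is the one in the
subject line: wrapping the column in a cast keeps the filter from
matching the column's statistics, so the planner falls back to a
hard-wired default. A minimal sketch, assuming a hypothetical table t
with an integer column v:)

    EXPLAIN SELECT * FROM t WHERE v > 100;          -- estimated from pg_stats on v
    EXPLAIN SELECT * FROM t WHERE v::float8 > 100;  -- the cast expression has no
                                                    -- statistics, so a default
                                                    -- selectivity constant is used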
I think that any such thing would probably just move the pain around.
As a recent example, just the other day somebody was bleating about
a poor rowcount estimate for a pattern match expression, which I suspect
was due to the arbitrary limit in patternsel() on how small a
selectivity it will believe. I'd rather look for excuses to remove
those sorts of things than add more.
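
To illustrate the sort of case meant here (table and pattern are
hypothetical): a prefix pattern rarer than the smallest selectivity
patternsel() will believe gets its estimate clamped upward.

    -- a very selective prefix pattern on a hypothetical table
    EXPLAIN SELECT * FROM events WHERE ref_code LIKE 'ZQX-99%';
    -- if the true match fraction is below patternsel()'s floor, the
    -- row estimate is clamped up and overstates the matching rows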
regards, tom lane