From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Steve Singer <ssinger(at)ca(dot)afilias(dot)info>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: slow bitmap heap scans on pg 9.2
Date: 2013-04-13 01:20:39
Message-ID: CAMkU=1wW1=merFPN9DdNY5vfGshPFkoB8Ze=BjWr=f3Q0KYjPQ@mail.gmail.com
Lists: pgsql-performance
On Thursday, April 11, 2013, Steve Singer wrote:
>
> I think the reason why it is picking the hash join based plans is because of:
>
>   Index Scan using table_b_1_ptid_orgid_ym_unq on table_b_1 b
>       (cost=0.00..503.86 rows=1 width=10) (actual time=0.016..0.017 rows=1 loops=414249)
>     Index Cond: ((a.id = a_id) AND (organization_id = 2) AND (year = 2013) AND (month = 3))
>     Filter: (product_id = 1)
>
Trying to reason about how the planner estimates costs for the inner side
of nested loops makes my head hurt.
So before doing that, could you run EXPLAIN (ANALYZE, BUFFERS) on both of
these much simpler queries (which are hopefully morally equivalent to that
planner node):
select * from table_b_1_b where a_id = <some plausible value>
    and organization_id = 2 and year = 2013 and month = 3;

select * from table_b_1_b where a_id = <some plausible value>
    and organization_id = 2 and year = 2013 and month = 3 and product_id = 1;
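(As a sketch of how the first diagnostic would actually be typed — the a_id
value 12345 here is a hypothetical placeholder; substitute a value that
actually occurs in your data:

```sql
-- Hypothetical example: 12345 stands in for "some plausible value" of a_id.
-- BUFFERS adds shared/local/temp block counts to each plan node, which shows
-- how much of the work was cached versus read from disk.
explain (analyze, buffers)
select * from table_b_1_b
where a_id = 12345
  and organization_id = 2
  and year = 2013
  and month = 3;
```

and likewise for the second query with the extra product_id = 1 condition.)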
Of particular interest here is whether the estimate of 1 row is due to the
specificity of the filter, or whether the index clauses alone are specific
enough to drive that estimate. (If you get many rows without the
product_id filter, that would explain the high estimate.)
Please run with the default cost parameters, or, if you can't get the right
plan with the defaults, specify which parameter settings you used.
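(You can check the current values of the main cost parameters from psql, e.g.:

```sql
-- Show the planner's cost settings in effect for this session.
show seq_page_cost;
show random_page_cost;
show cpu_tuple_cost;
show effective_cache_size;
```

Anything that differs from the documented defaults is worth mentioning
alongside the plans.)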
Cheers,
Jeff