From: David Osborne <david(at)qcode(dot)co(dot)uk>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow 3 Table Join with v bad row estimate
Date: 2015-11-10 17:31:52
Message-ID: CAKmpXCf4JN6T0y202YOO6Lbw-Bi8NWsSyyPaegF81vHZpkjthg@mail.gmail.com
Lists: pgsql-performance
Ok - wow.

Adding that index, I get the same estimate of 1 row, but the runtime drops to
~450ms - a 23,000ms improvement.
http://explain.depesz.com/s/TzF8h
This is great. So as a general rule of thumb: if I see a Join Filter
removing an excessive number of rows, I can check whether that filter
condition can instead be added to an index on the table already being
scanned, so it is applied during the inner index scan rather than after the
join.
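For reference, the index I created follows Tom's suggestion below; a minimal sketch (the index name is my own choice, not anything mandated):

```sql
-- Composite index so the po_id join condition is checked during the
-- inner index scan on branch_purchase_order_products, instead of
-- forming ~23M join rows and discarding them in the Join Filter.
CREATE INDEX branch_po_products_po_id_product_code_idx
    ON branch_purchase_order_products (po_id, product_code);
```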
Thanks for this!
On 10 November 2015 at 17:05, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> But taking a step back, it seems like the core problem in your explain
> output is here:
>
> >> -> Nested Loop (cost=1.29..83263.71 rows=1 width=24) (actual
> time=0.196..23799.930 rows=53595 loops=1)
> >> Join Filter: (o.po_id = p.po_id)
> >> Rows Removed by Join Filter: 23006061
> >> Buffers: shared hit=23217993 dirtied=1
>
> That's an awful lot of rows being formed by the join only to be rejected.
> You should try creating an index on
> branch_purchase_order_products(po_id, product_code)
> so that the po_id condition could be enforced at the inner indexscan
> instead of the join.
>