From: | Igor Neyman <ineyman(at)perceptron(dot)com> |
---|---|
To: | David Osborne <david(at)qcode(dot)co(dot)uk>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: Slow 3 Table Join with v bad row estimate |
Date: | 2015-11-10 18:38:30 |
Message-ID: | A76B25F2823E954C9E45E32FA49D70ECCD583368@mail.corp.perceptron.com |
Lists: | pgsql-performance |
From: pgsql-performance-owner(at)postgresql(dot)org [mailto:pgsql-performance-owner(at)postgresql(dot)org] On Behalf Of David Osborne
Sent: Tuesday, November 10, 2015 12:32 PM
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: [PERFORM] Slow 3 Table Join with v bad row estimate
Ok - wow.
Adding that index, I get the same estimate of 1 row, but a runtime of ~450ms.
A 23000ms improvement.
http://explain.depesz.com/s/TzF8h
This is great. So as a general rule of thumb, if I see a Join Filter removing an excessive number of rows, I can check whether that filter condition can be added to an index on the table that is already being scanned.
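As a sketch of that rule of thumb (the table, column, and index names below are hypothetical, not taken from this thread):

```sql
-- Before: the index covers only the join key, so the planner applies the
-- extra condition as a Join Filter after the scan, and EXPLAIN ANALYZE
-- shows something like:
--   Join Filter: (o.status = 'open')
--   Rows Removed by Join Filter: 1234567

-- Adding the filtered column to the index lets that condition be checked
-- during the index scan itself, instead of discarding rows afterwards:
CREATE INDEX orders_customer_status_idx
    ON orders (customer_id, status);

-- Then re-check the plan to confirm the Join Filter is gone:
EXPLAIN ANALYZE
SELECT *
FROM   customers c
JOIN   orders o ON o.customer_id = c.id
WHERE  o.status = 'open';
```

Whether this helps depends on the selectivity of the extra condition; running `EXPLAIN ANALYZE` before and after, as in this thread, is the way to verify.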
Thanks for this!
David,
I believe the plan you are posting is the old plan.
Could you please post explain analyze with the index that Tom suggested?
Regards,
Igor Neyman