From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Choosing the cheapest optimizer cost
Date: 2016-06-21 16:20:46
Message-ID: 20160621162046.GJ24184@momjian.us
Lists: pgsql-hackers
On Tue, Jun 21, 2016 at 11:17:19AM -0400, Robert Haas wrote:
> If the index scans are parameterized by values from the seq scan,
> which is likely the situation in which this sort of plan will be
> generated, we'll pay the extra cost of building the hash table once
> per row in something_big.
>
> I think we should consider switching from a nested loop to a hash join
> on the fly if the outer relation turns out to be bigger than expected.
> We could work out during planning what the expected breakeven point
> is; if the actual outer row count passes that, switch to a hash join.
> This has been discussed before, but nobody's tried to do the work,
> AFAIK.
Yes, I hope we someday try either adjusting the execution plan on the fly
when row counts turn out to be inaccurate, or feeding information about
misestimation back to the optimizer for future queries.
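The breakeven idea above can be sketched with a toy cost model. This is an
illustrative simplification, not PostgreSQL's actual costing code: the cost
values and function names are hypothetical, but the arithmetic mirrors the
proposal, where a nested loop pays a per-outer-row rescan cost while a hash
join pays a one-time build cost plus a cheaper per-row probe.

```python
# Toy cost model (hypothetical numbers, not PostgreSQL's real cost functions).

def nested_loop_cost(outer_rows, inner_rescan_cost):
    # A parameterized nested loop re-executes the inner scan once per outer row.
    return outer_rows * inner_rescan_cost

def hash_join_cost(outer_rows, build_cost, probe_cost):
    # A hash join builds the hash table once, then probes it per outer row.
    return build_cost + outer_rows * probe_cost

def breakeven_outer_rows(build_cost, inner_rescan_cost, probe_cost):
    # Solve: rows * rescan == build + rows * probe
    # The planner could record this threshold; at execution time, once the
    # actual outer row count passes it, switching to a hash join wins.
    return build_cost / (inner_rescan_cost - probe_cost)

# Example: inner rescan costs 10, hash build 1000, probe 1.
threshold = breakeven_outer_rows(1000.0, 10.0, 1.0)  # about 111 outer rows
```

Below the threshold the nested loop is cheaper; above it the hash join's
one-time build cost is amortized and it pulls ahead.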
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +