From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Timothy Garnett <tgarnett(at)panjiva(dot)com>
Cc: "pgsql-performa(dot)" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Merge join doesn't seem to break early when I (and planner) think it should - 10.4
Date: 2019-12-27 16:52:18
Message-ID: CAMkU=1yNKN8_79+BAh-zN1y8qRPF9saDzkoj8YqsqLEoPpq4ow@mail.gmail.com
Lists: pgsql-performance
On Thu, Dec 26, 2019 at 6:57 PM Timothy Garnett <tgarnett(at)panjiva(dot)com>
wrote:
>
> So far I've been unable to create a smaller / toy example that exhibits
> the same behavior. Some things that may be unusual about the situation:
> keytbl is bigint and the values are large (all are > 2^48) and sparse/dense
> (big chunks where the id advances by 1 separated by large (> 2^48) regions
> with no rows), the top 200k or so rows of tmp_table by keytbl don't have a
> corresponding row in tbl, and this is a bit of an older dot release
> (10.4). I have a workaround (disabling merge join for the query) so I'm
> mostly trying to figure out what's going on and if I'm understanding the
> situation correctly.
>
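(A minimal sketch of the session-local workaround mentioned above; the `SET LOCAL enable_mergejoin = off` approach and the placeholder query are assumptions, not the original poster's exact statements:)

```sql
BEGIN;
-- Discourage merge joins for this query only; SET LOCAL reverts at
-- transaction end, so other sessions and later statements are unaffected.
SET LOCAL enable_mergejoin = off;
-- Placeholder for the affected query, e.g.:
SELECT count(*) FROM tmp_table JOIN tbl USING (keytbl);
COMMIT;
```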
Can you share the toy example, using things like random() and
generate_series() to populate it? Preferably scaled down to 10 million
rows or so in the larger table.
Does it reproduce in 10.11? If not, then there is really nothing worth
looking into. Any fix that can be done would certainly not be re-released
into 10.4. And does it reproduce in 12.1 or 13dev? Because chances are
any improvement wouldn't even be back-patched into any minor release at all.
> It's interesting that even if it worked as expected, the merge join plan
> seems a lot riskier in that if the analyze didn't catch a single large
> outlier value of keytbl in tmp_tbl or a row with a large value for keytbl
> was inserted into tmp_tbl since the last analyze it could be forced to walk
> the entire index of the tbl (which based on the filter count looks like it
> involves touching each row of this large table for the filter even if it
> doesn't have a corresponding row to merge to).
>
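(Continuing the sketch above, with the same hypothetical tables: one way to probe that exact risk is to add a single outlier after the last ANALYZE and compare the inner index scan's actual row count against its estimate:)

```sql
-- One post-ANALYZE outlier in tmp_table, invisible to the planner's stats.
INSERT INTO tmp_table VALUES ((SELECT max(keytbl) FROM tbl) * 2);

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM tmp_table JOIN tbl USING (keytbl);
-- If a merge join is chosen, the index scan on tbl may show far more
-- actual rows than estimated: the one outlier keeps the merge from
-- stopping early, so the scan can run to the end of the index.
```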
There has been discussion of building a riskiness factor into the planner,
but it has never gone anywhere. Everything has its own risk (with Hash
Joins, for example, the data could be pathological and everything might
hash to a few buckets, or 32 bits of hash code might not be enough).
By the time you can adequately analyze all the risks, you would probably
learn enough to just make the planner better absolutely, without adding
another dimension to all the things it considers.
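(As a quick illustration of the 32-bit point, using PostgreSQL's built-in hashint8 function for bigints: with 10 million distinct keys the birthday bound predicts roughly n²/2³³ ≈ 11,000 hash collisions even on perfectly ordinary data:)

```sql
SELECT count(*) AS keys,
       count(DISTINCT hashint8(g::bigint)) AS distinct_hashes,
       count(*) - count(DISTINCT hashint8(g::bigint)) AS collisions
FROM generate_series(1, 10000000) AS g;
```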
Cheers,
Jeff