From: David Steele <david(at)pgmasters(dot)net>
To: Antonin Houska <ah(at)cybertec(dot)at>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Re: Suspicious call of initial_cost_hashjoin()
Date: 2018-03-01 19:45:22
Message-ID: c34c8fcd-abfa-254a-b447-2f1003c6c730@pgmasters.net
Lists: pgsql-hackers
Hi Antonin,
On 12/22/17 6:13 AM, Thomas Munro wrote:
> On Fri, Dec 22, 2017 at 10:45 PM, Antonin Houska <ah(at)cybertec(dot)at> wrote:
>> try_partial_hashjoin_path() passes constant true for the parallel_hash
>> argument of initial_cost_hashjoin(). Shouldn't it instead pass the
>> parallel_hash argument that it receives?
>
> Thanks. Yeah. When initial_cost_hashjoin() calls
> get_parallel_divisor() on a non-partial inner path I think it would
> return 1.0, so no damage was done there, but when
> ExecChooseHashTableSize() receives try_combined_work_mem == true it
> might underestimate the number of batches required for a partial hash
> join without parallel hash, because it would incorrectly assume that a
> single batch join could use the combined work_mem budget. This was
> quite well hidden because ExecHashTableCreate() calls
> ExecChooseHashTableSize() again (rather than reusing the results from
> planning time), so the bad nbatch estimate doesn't show up anywhere.
Does this look right to you? If so, can you sign up as a reviewer and
mark it Ready for Committer?
Thanks,
--
-David
david(at)pgmasters(dot)net