From: | Amit Khandekar <amitdkhan(dot)pg(at)gmail(dot)com> |
---|---|
To: | amul sul <sulamul(at)gmail(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <rhaas(at)postgresql(dot)org>, pgsql-committers(at)postgresql(dot)org |
Subject: | Re: pgsql: Support Parallel Append plan nodes. |
Date: | 2017-12-06 10:01:44 |
Message-ID: | CAJ3gD9cJQ4d-XhmZ6BqM9rMM2KDBfpkdgOAb4+psz56uBuMQ_A@mail.gmail.com |
Lists: | pgsql-committers pgsql-hackers |
On 6 December 2017 at 14:31, amul sul <sulamul(at)gmail(dot)com> wrote:
> Copying & replying to Amit Khandekar's email here:
>
> On Wed, Dec 6, 2017 at 11:45 AM, amul sul <sulamul(at)gmail(dot)com> wrote:
>>> Thanks Tom for the crash analysis. I think this is the same reason that
>>> Rajkumar's reported case[1] was crashing only with partition-wise join = on.
>>> I tried to fix this crash[2], but in the place where I added the assert
>>> check I wrongly assumed that we always arrive there via the previous check
>>> in the while loop.
>>>
>>> Instead of the assert check, we need bailout logic similar to [2] before
>>> looping back to the first partial plan. The attached patch does that. I've
>>> tested with parallel_leader_participation = off as suggested by Andres;
>>> make check looks good apart from some expected regression diffs.
>>>
>>> 1] https://postgr.es/m/CAKcux6m+6nTO6C08kKaj-Waffvgvp-9SD3RCGStX=Mzk0gQU8g@mail.gmail.com
>>> 2] https://postgr.es/m/CAAJ_b975k58H+Ed4=p0vbJunwO2reOMX5CVB8_R=JmXxY3uW=Q@mail.gmail.com
>>>
>>
>> @@ -506,7 +506,14 @@ choose_next_subplan_for_worker(AppendState *node)
>> node->as_whichplan = pstate->pa_next_plan++;
>> if (pstate->pa_next_plan >= node->as_nplans)
>> {
>> - Assert(append->first_partial_plan < node->as_nplans);
>> + /* No partial plans then bail out. */
>> + if (append->first_partial_plan >= node->as_nplans)
>> + {
>> + pstate->pa_next_plan = INVALID_SUBPLAN_INDEX;
>> + node->as_whichplan = INVALID_SUBPLAN_INDEX;
>> + LWLockRelease(&pstate->pa_lock);
>> + return false;
>> + }
>> pstate->pa_next_plan = append->first_partial_plan;
>>
>> In the above code, the fact that we have not bailed out of the earlier
>> for loop means that we have already found an unfinished plan, and
>> node->as_whichplan is set to that plan. So while setting the next plan
>> above for the other workers to pick up, we should not return false, nor
>> should we set node->as_whichplan to INVALID_SUBPLAN_INDEX. Instead, just
>> set pa_next_plan to INVALID_SUBPLAN_INDEX. Otherwise, the chosen plan
>> won't get executed at all.
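To make the distinction concrete, here is a minimal standalone C sketch of the corrected wrap-around handling (not the actual nodeAppend.c code; the simplified structs, the `handle_wraparound` helper, and the omission of locking are my own illustration): when the wrap-around finds no partial plans, only the shared `pa_next_plan` is invalidated for other workers, while the worker's already-claimed `as_whichplan` is left intact and the function still reports success.

```c
#include <assert.h>
#include <stdbool.h>

#define INVALID_SUBPLAN_INDEX (-1)

/* Hypothetical, simplified stand-ins for the shared and per-worker state. */
typedef struct
{
    int pa_next_plan;       /* next plan for any worker to claim */
} ParallelAppendStateSketch;

typedef struct
{
    int as_whichplan;       /* plan this worker has already claimed */
    int as_nplans;
    ParallelAppendStateSketch *pstate;
} AppendStateSketch;

/*
 * Sketch of the corrected logic: by the time we get here, the caller has
 * already claimed an unfinished non-partial plan (as_whichplan is valid).
 * If pa_next_plan ran past the last plan and there are no partial plans
 * to wrap back to, we only mark pa_next_plan invalid so that other
 * workers stop looking -- and we still return true, so the claimed plan
 * gets executed.
 */
static bool
handle_wraparound(AppendStateSketch *node, int first_partial_plan)
{
    ParallelAppendStateSketch *pstate = node->pstate;

    if (pstate->pa_next_plan >= node->as_nplans)
    {
        if (first_partial_plan >= node->as_nplans)
        {
            /* No partial plans: nothing left for other workers. */
            pstate->pa_next_plan = INVALID_SUBPLAN_INDEX;
            return true;    /* as_whichplan still points at our plan */
        }
        /* Otherwise, wrap other workers back to the first partial plan. */
        pstate->pa_next_plan = first_partial_plan;
    }
    return true;
}
```

The buggy version additionally clobbered `as_whichplan` and returned false in the no-partial-plans branch, which is exactly what caused the already-claimed plan to be skipped.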
>>
>
> Understood, thanks for the review. Updated patch attached.
>
> 1] https://postgr.es/m/CAJ3gD9e3_D3fFqzWBFYoaF-6yCXgbOFn3Mb-pgd_mxvjpsw7Rw@mail.gmail.com
This looks good.
In the attached revised patch, I have just added some comments to the changes you made.
>
> Regards,
> Amul
--
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company
Attachment | Content-Type | Size |
---|---|---|
Parallel-Append-crash-fix-v3.patch | application/octet-stream | 863 bytes |