From: Amit Langote <amitlangote09(at)gmail(dot)com>
To: Greg Nancarrow <gregn4422(at)gmail(dot)com>
Cc: "Hou, Zhijie" <houzj(dot)fnst(at)cn(dot)fujitsu(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, vignesh C <vignesh21(at)gmail(dot)com>, David Rowley <dgrowleyml(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)fujitsu(dot)com>, "Tang, Haiying" <tanghy(dot)fnst(at)cn(dot)fujitsu(dot)com>
Subject: Re: Parallel INSERT (INTO ... SELECT ...)
Date: 2021-02-17 13:34:42
Message-ID: CA+HiwqFBy2RURbMhE+WjNySsuFKNb65qOYLTasu2ER2xedEGMw@mail.gmail.com
Lists: pgsql-hackers
On Wed, Feb 17, 2021 at 10:44 AM Greg Nancarrow <gregn4422(at)gmail(dot)com> wrote:
> On Wed, Feb 17, 2021 at 12:19 AM Amit Langote <amitlangote09(at)gmail(dot)com> wrote:
> > On Mon, Feb 15, 2021 at 4:39 PM Greg Nancarrow <gregn4422(at)gmail(dot)com> wrote:
> > > On Sat, Feb 13, 2021 at 12:17 AM Amit Langote <amitlangote09(at)gmail(dot)com> wrote:
> > > > On Thu, Feb 11, 2021 at 4:43 PM Greg Nancarrow <gregn4422(at)gmail(dot)com> wrote:
> > > > > Actually, I tried adding the following in the loop that checks the
> > > > > parallel-safety of each partition and it seemed to work:
> > > > >
> > > > > glob->relationOids =
> > > > > lappend_oid(glob->relationOids, pdesc->oids[i]);
> > > > >
> > > > > Can you confirm, is that what you were referring to?
> > > >
> > > > Right. I had mistakenly mentioned PlannerGlobal.invalItems, sorry.
> > > >
> > > > Although it gets the job done, I'm not sure if manipulating
> > > > relationOids from max_parallel_hazard() or its subroutines is okay,
> > > > but I will let the committer decide that. As I mentioned above, the
> > > > person who designed this decided for some reason that it is
> > > > extract_query_dependencies()'s job to populate
> > > > PlannerGlobal.relationOids/invalItems.
> > >
> > > Yes, it doesn't really seem right doing it within max_parallel_hazard().
> > > I tried doing it in extract_query_dependencies() instead - see
> > > attached patch - and it seems to work, but I'm not sure if there might
> > > be any unintended side-effects.
> >
> > One issue I see with the patch is that it fails to consider
> > multi-level partitioning, because it's looking up partitions only in
> > the target table's PartitionDesc and no other.
> >
> > @@ -3060,8 +3066,36 @@ extract_query_dependencies_walker(Node *node, PlannerInfo *context)
> > RangeTblEntry *rte = (RangeTblEntry *) lfirst(lc);
> >
> > if (rte->rtekind == RTE_RELATION)
> > - context->glob->relationOids =
> > - lappend_oid(context->glob->relationOids, rte->relid);
> > + {
> > + PlannerGlobal *glob;
> > +
> > + glob = context->glob;
> > + glob->relationOids =
> > + lappend_oid(glob->relationOids, rte->relid);
> > + if (query->commandType == CMD_INSERT &&
> > + rte->relkind == RELKIND_PARTITIONED_TABLE)
> >
> > The RTE whose relkind is being checked here may not be the INSERT
> > target relation's RTE, even though that's perhaps always true today.
> > So, I suggest to pull the new block out of the loop over rtable and
> > perform its deeds on the result RTE explicitly fetched using
> > rt_fetch(), preferably using a separate recursive function. I'm
> > thinking something like the attached revised version.
>
> Thanks. Yes, I'd forgotten about the fact that a partition may itself be
> partitioned, so it needs to be recursive (like in the parallel-safety
> checks).
> Your revised version seems OK, though I do have a concern:
> Is the use of "table_close(rel, NoLock)" intentional? That will keep
> the lock (lockmode) until end-of-transaction.
Yes, that's intentional: we always keep locks on relations involved in a
plan until end of transaction. Otherwise, a partition could be altered in
a parallel-unsafe way between being judged safe for parallel insertion
and the parallel insert actually being executed.
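To be explicit about the pattern I mean (just a sketch, not code from the patch; the function and variable names here are made up):

#include "postgres.h"
#include "access/table.h"
#include "utils/rel.h"

static void
check_partition_parallel_safety(Oid partoid)
{
    /* Take a lock that will survive until end of transaction. */
    Relation    rel = table_open(partoid, AccessShareLock);

    /* ... examine the partition for parallel-unsafe features ... */

    /*
     * NoLock means "close the relcache entry but keep the lock": the
     * AccessShareLock taken above is released only at transaction end,
     * so concurrent DDL cannot invalidate the safety decision between
     * planning and executing the parallel insert.
     */
    table_close(rel, NoLock);
}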
BTW, I just noticed that extract_query_dependencies() runs on a
rewritten, but not-yet-planned query tree; that is, it only populates the
CachedPlanSource's relationOids, not the CachedPlan's. The former is
only for tracking the dependencies of an unplanned Query, so partitions
should never be added to it. Instead, they should be added to
PlannedStmt.relationOids (note that PlannedStmt belongs to CachedPlan),
which is kind of what your earlier patch did. Needless to say,
PlanCacheRelCallback checks both CachedPlanSource.relationOids and
PlannedStmt.relationOids, so if it receives an invalidation message
about a partition, its OID is matched against the latter.
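For reference, the matching amounts to something like the following (a simplified sketch of the idea, not the actual plancache.c code):

#include "postgres.h"
#include "nodes/plannodes.h"
#include "utils/plancache.h"

static bool
plan_depends_on_relation(CachedPlanSource *plansource, Oid relid)
{
    ListCell   *lc;

    /* Dependencies recorded for the rewritten-but-unplanned Query */
    if (list_member_oid(plansource->relationOids, relid))
        return true;

    /* Dependencies recorded in each statement of the finished plan */
    if (plansource->gplan != NULL)
    {
        foreach(lc, plansource->gplan->stmt_list)
        {
            PlannedStmt *pstmt = lfirst_node(PlannedStmt, lc);

            if (list_member_oid(pstmt->relationOids, relid))
                return true;
        }
    }

    return false;
}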
All that is to say that we should move the code that adds partition OIDs
as plan dependencies into set_plan_references(), which is what populates
PlannedStmt.relationOids in the first place. I've updated the patch to
do that. It also occurred to me that we can avoid pointlessly adding
partitions if the final plan won't use parallelism; for that, the patch
checks glob->parallelModeNeeded, which seems to do the trick, though I
don't know if that's the correct way of doing it.
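In case it helps to see the shape of it, the new code is roughly like the following (only a sketch to show the idea; the helper name is made up, and the details in the attached patch may differ):

#include "postgres.h"
#include "access/table.h"
#include "catalog/pg_class.h"
#include "nodes/pathnodes.h"
#include "partitioning/partdesc.h"
#include "utils/lsyscache.h"

static void
add_partition_oids_as_deps(PlannerGlobal *glob, Oid parentrelid)
{
    Relation    rel;
    PartitionDesc pdesc;
    int         i;

    /* Nothing to do if the final plan won't use parallelism anyway. */
    if (!glob->parallelModeNeeded)
        return;

    /* NoLock is OK here: the parallel-safety checks locked it already. */
    rel = table_open(parentrelid, NoLock);
    pdesc = RelationGetPartitionDesc(rel);

    for (i = 0; i < pdesc->nparts; i++)
    {
        Oid         partrelid = pdesc->oids[i];

        /* Record the partition as a dependency of the finished plan. */
        glob->relationOids = lappend_oid(glob->relationOids, partrelid);

        /* Recurse into sub-partitioned children (multi-level case). */
        if (get_rel_relkind(partrelid) == RELKIND_PARTITIONED_TABLE)
            add_partition_oids_as_deps(glob, partrelid);
    }

    table_close(rel, NoLock);
}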
--
Amit Langote
EDB: http://www.enterprisedb.com
| Attachment | Content-Type | Size |
|---|---|---|
| setrefs-v3.patch | application/octet-stream | 3.5 KB |