From: Greg Nancarrow <gregn4422(at)gmail(dot)com>
To: Amit Langote <amitlangote09(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, "Hou, Zhijie" <houzj(dot)fnst(at)cn(dot)fujitsu(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, vignesh C <vignesh21(at)gmail(dot)com>, David Rowley <dgrowleyml(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)fujitsu(dot)com>, "Tang, Haiying" <tanghy(dot)fnst(at)cn(dot)fujitsu(dot)com>
Subject: Re: Parallel INSERT (INTO ... SELECT ...)
Date: 2021-03-08 10:24:09
Message-ID: CAJcOf-f62i3W0KstDDrx+QBZFJ+pojhtvKxUc_MX8s6Y_FQcZw@mail.gmail.com
Lists: pgsql-hackers
On Mon, Mar 8, 2021 at 7:42 PM Greg Nancarrow <gregn4422(at)gmail(dot)com> wrote:
>
> On Mon, Mar 8, 2021 at 6:25 PM Amit Langote <amitlangote09(at)gmail(dot)com> wrote:
> >
> > A couple of things that look odd in v24-0001 (sorry if they were like
> > that from the beginning):
> >
> > +static bool
> > +target_rel_max_parallel_hazard(max_parallel_hazard_context *context)
> > +{
> > + bool max_hazard_found;
> > +
> > + Relation targetRel;
> > +
> > + /*
> > + * The target table is already locked by the caller (this is done in the
> > + * parse/analyze phase), and remains locked until end-of-transaction.
> > + */
> > + targetRel = table_open(context->target_rte->relid,
> > + context->target_rte->rellockmode);
> >
> > The comment seems to imply that the lock need not be retaken here, but
> > the code does. Maybe it's fine to pass NoLock here, but use
> > rellockmode when locking partitions, because they would not have been
> > locked by the parser/analyzer.
>
> Actually, it was originally NoLock, but I think in your suggested
> changes (v15_delta.diff) to better integrate the extra parallel-safety
> checks into max_parallel_hazard(), you changed "NoLock" to
> "context->targetRTE->rellockmode".
> Having said that, since the table is already locked, using
> "context->target_rte->rellockmode" here just increments the lock
> reference count, so I doubt it makes any real difference, given that
> the lock is held until end-of-transaction.
> I'll revert it back to NoLock.
>
> > Which brings me to:
> >
> > +static bool
> > +target_rel_partitions_max_parallel_hazard(Relation rel,
> > + max_parallel_hazard_context *context)
> > +{
> > ...
> > + for (i = 0; i < pdesc->nparts; i++)
> > + {
> > + bool max_hazard_found;
> > + Relation part_rel;
> > +
> > + /*
> > + * The partition needs to be locked, and remain locked until
> > + * end-of-transaction to ensure its parallel-safety state is not
> > + * hereafter altered.
> > + */
> > + part_rel = table_open(pdesc->oids[i], AccessShareLock);
> >
> > Not that I can prove there to be any actual hazard, but we tend to
> > avoid taking locks with different strengths in the same query as would
> > occur with this piece of code. We're locking the partition with
> > AccessShareLock here, but the executor will lock the partition with
> > the stronger RowExclusiveLock mode before trying to insert into it.
> > Better to use the stronger mode to begin with, or just use the target
> > partitioned table's RTE's rellockmode, which would be RowExclusiveLock.
> > You can see that that's what AcquireExecutorLocksOnPartitions() will
> > do in the generic plan case.
> >
>
> OK, I see what you mean; best to use the target partitioned table's
> RTE's rellockmode in this case, then.
> I'll update the patch accordingly.
>
I've attached an updated set of patches with the suggested locking changes.
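To be concrete, the first change simply opens the target table without
re-taking the lock, since the parse/analyze phase has already locked it
(a rough sketch of the idea, not the exact patch hunk):

    /*
     * The target table is already locked by the caller (this is done in the
     * parse/analyze phase) and remains locked until end-of-transaction, so
     * open it with NoLock rather than re-taking the lock.
     */
    targetRel = table_open(context->target_rte->relid, NoLock);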
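And the second change locks each partition using the target table's RTE's
rellockmode (RowExclusiveLock for INSERT) rather than AccessShareLock, so
the lock strength matches what the executor will take later (again, just a
sketch):

    /*
     * Lock the partition with the target RTE's rellockmode, matching the
     * lock that the executor (and AcquireExecutorLocksOnPartitions()) will
     * acquire before inserting into it.
     */
    part_rel = table_open(pdesc->oids[i], context->target_rte->rellockmode);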
Regards,
Greg Nancarrow
Fujitsu Australia
Attachments:
  v25-0001-Enable-parallel-SELECT-for-INSERT-INTO-.-SELECT.patch (application/octet-stream, 65.8 KB)
  v25-0002-Add-new-parallel-insert-GUC-and-table-options.patch (application/octet-stream, 19.6 KB)
  v25-0003-Parallel-SELECT-for-INSERT-INTO-.-SELECT-advanced-tests.patch (application/octet-stream, 45.2 KB)