From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Assorted leaks and weirdness in parallel execution
Date: 2017-08-31 19:08:37
Message-ID: CA+Tgmoa8mNQEj9X8+hwQH2gMxgxDB_Pgz3rMGJxFzTeLM=+6eQ@mail.gmail.com
Lists: pgsql-hackers
On Thu, Aug 31, 2017 at 2:13 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Yeah, it is different. What I'm looking at is that nodeGather does
> DestroyTupleQueueReader as soon as it's seen EOF on a given tuple queue.
> That can't save any worker cycles. The reason seems to be that it wants
> to collapse its array of TupleQueueReader pointers so only live queues are
> in it. That's reasonable, but I'm inclined to implement it by making the
> Gather node keep a separate working array of pointers to only the live
> TupleQueueReaders. The ParallelExecutorInfo would keep the authoritative
> array of all TupleQueueReaders that have been created, and destroy them in
> ExecParallelFinish.
Hmm, that's a thought.
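To make sure I follow the ownership split you're proposing, here is a much-simplified sketch (not actual PostgreSQL source; the type and function names below are illustrative stand-ins, not the real TupleQueueReader/ParallelExecutorInfo definitions). The executor-info structure keeps the authoritative array of every reader created and destroys them all at finish time, while the Gather-like consumer keeps a separate working array that it merely collapses on EOF:

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for a TupleQueueReader. */
typedef struct Reader
{
    int  id;
    bool destroyed;
} Reader;

/* Authoritative owner (cf. ParallelExecutorInfo): tracks every
 * reader ever created, regardless of whether it has hit EOF. */
typedef struct ExecInfo
{
    Reader **readers;
    int      nreaders;
} ExecInfo;

/* Consumer-side state (cf. GatherState): a working array holding
 * pointers only to the still-live queues. */
typedef struct GatherState
{
    Reader **live;
    int      nlive;
} GatherState;

/* On EOF from live[i], just collapse the working array; the reader
 * itself is NOT destroyed here -- that is deferred to exec_finish(). */
static void
gather_drop_reader(GatherState *gs, int i)
{
    gs->live[i] = gs->live[--gs->nlive];
}

/* At shutdown (cf. ExecParallelFinish), the authoritative owner
 * destroys every reader exactly once. */
static void
exec_finish(ExecInfo *ei)
{
    for (int i = 0; i < ei->nreaders; i++)
        ei->readers[i]->destroyed = true;
}
```

So the per-tuple EOF path never frees anything, and there's a single, unconditional teardown point at the end.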
> Your point is that we want to shut down the TupleQueueReaders immediately
> on rescan, which we do already. Another possible scenario is to shut them
> down once we've reached the passed-down tuple limit (across the whole
> Gather, not per-child which is what 3452dc524 implemented). I don't think
> what I'm suggesting would complicate that.
Yeah. I think the way to do that would be to implement what is
mentioned in the comment for ExecShutdownNode: call that function on
the child plan as soon as the LIMIT is filled.
(Hmm, the reference to someday covering FDW in the header of that
comment is obsolete, isn't it? Another oversight on my part.)
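For concreteness, the shape I have in mind is roughly the following (again a hedged toy sketch, not the real executor code -- the node structure and function names are made up for illustration): once the limit is filled, recursively shut down the child subtree, analogous to what ExecShutdownNode's comment describes, releasing external resources such as parallel workers without tearing down the plan state itself.

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for a plan-state node. */
typedef struct Node
{
    struct Node *left;
    struct Node *right;
    bool         shut_down;
} Node;

/* Recursively shut down a subtree early (cf. ExecShutdownNode):
 * idempotent, and it only releases external resources, so the node
 * can still be inspected or rescanned-from afterward. */
static void
shutdown_node(Node *node)
{
    if (node == NULL || node->shut_down)
        return;
    shutdown_node(node->left);
    shutdown_node(node->right);
    node->shut_down = true;
}

/* A Limit-like consumer: pull at most 'bound' tuples of the
 * 'available' ones; as soon as the bound is filled, shut the
 * child subtree down rather than leaving workers idle. */
static int
limit_consume(Node *child, int bound, int available)
{
    int got = (available < bound) ? available : bound;

    if (got == bound)
        shutdown_node(child);
    return got;
}
```

The point is that the shutdown happens across the whole subtree as soon as the bound is satisfied, rather than per-child as 3452dc524 did.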
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company