From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgreSQL(dot)org
Subject: Assorted leaks and weirdness in parallel execution
Date: 2017-08-31 15:09:37
Message-ID: 8670.1504192177@sss.pgh.pa.us
Lists: pgsql-hackers

I complained a couple weeks ago that nodeGatherMerge looked like it
leaked a lot of memory when commanded to rescan. Attached are three
proposed patches that, in combination, demonstrably result in zero
leakage across repeated rescans.

The first thing I noticed when I started digging into this was that
there was some leakage in TopMemoryContext, which seemed pretty weird.
What it turned out to be was on_dsm_detach callback registration records.
This happens because, although the comments for shm_mq_attach() claim
that shm_mq_detach() will free the shm_mq_handle, it does no such thing,
and it doesn't worry about canceling the on_dsm_detach registration
either. So over repeated attach/detach cycles, we leak shm_mq_handles
and also callback registrations. This isn't just a memory leak: it
means that, whenever we finally do detach from the DSM segment, we'll
execute a bunch of shm_mq_detach() calls pointed at long-since-detached-
and-reused shm_mq structs. That seems incredibly dangerous. It manages
not to fail ATM because our stylized use of DSM means that a shm_mq
would only ever be re-used as another shm_mq; so the only real effect is
that our last counterparty process, if still attached, would receive N
SetLatch events not just one. But it's going to crash and burn someday.

For extra fun, the error MQs weren't ever explicitly detached from,
just left to rot until on_dsm_detach time. Although we did pfree the
shm_mq_handles out from under them.
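
To make the failure mode concrete, here's a sketch of the per-rescan pattern
that produces the leak (illustrative only, not code from the tree):

/*
 * Each shm_mq_attach() pallocs a shm_mq_handle and registers an
 * on_dsm_detach callback, but the old shm_mq_detach(shm_mq *) reverses
 * neither, so every attach/detach cycle leaves one handle and one
 * callback registration behind.
 */
#include "postgres.h"
#include "storage/shm_mq.h"

static void
rescan_cycle(shm_mq *mq, dsm_segment *seg)
{
    /* pallocs a handle and registers an on_dsm_detach callback */
    shm_mq_handle *mqh = shm_mq_attach(mq, seg, NULL);

    /* ... exchange tuples with the counterparty here ... */

    /*
     * Pre-patch: this only marks the queue detached and kicks the
     * counterparty's latch.  The handle is never pfree'd and the callback
     * registration is never canceled, so when dsm_detach() finally runs,
     * stale callbacks fire against long-since-reused shm_mq structs.
     */
    shm_mq_detach(mq);
}
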
So the first patch attached cleans this up by making shm_mq_detach
do what it was advertised to, ie fully reverse what shm_mq_attach
does. That means it needs to take a shm_mq_handle, not a bare shm_mq,
but that actually makes the callers cleaner anyway. (With this patch,
there are no callers of shm_mq_get_queue(); should we remove that?)
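
For clarity, the shape of a handle-based detach is roughly this (a sketch,
not the patch text; shm_mq_detach_internal stands in for the old detach
logic, and the mqh_* fields are shm_mq.c's backend-local state):

/* in shm_mq.c */
void
shm_mq_detach(shm_mq_handle *mqh)
{
    /* Mark ourselves detached and kick the counterparty's latch. */
    shm_mq_detach_internal(mqh->mqh_queue);

    /* Cancel the on_dsm_detach registration made by shm_mq_attach(). */
    if (mqh->mqh_segment)
        cancel_on_dsm_detach(mqh->mqh_segment,
                             shm_mq_detach_callback,
                             PointerGetDatum(mqh->mqh_queue));

    /* Free the backend-local state, as the comments always promised. */
    if (mqh->mqh_buffer != NULL)
        pfree(mqh->mqh_buffer);
    pfree(mqh);
}
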
The second patch cleans up assorted garden-variety leaks when
rescanning a GatherMerge node, by having it allocate its work
arrays just once and then re-use them across rescans.
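
The pattern there is just allocate-once-then-reuse, along these lines
(names loosely after nodeGatherMerge.c; take the details as illustrative):

static void
gather_merge_setup(GatherMergeState *gm_state)
{
    int     nreaders = gm_state->nreaders;

    if (gm_state->gm_slots == NULL)
    {
        /* First time through: one slot per worker plus one for the leader. */
        gm_state->gm_slots = (TupleTableSlot **)
            palloc0((nreaders + 1) * sizeof(TupleTableSlot *));
        gm_state->gm_tuple_buffers = (GMReaderTupleBuffer *)
            palloc0(nreaders * sizeof(GMReaderTupleBuffer));
    }
    else
    {
        /*
         * Rescan: the arrays survive; just discard whatever tuples are
         * still sitting in them from the previous scan.
         */
        gather_merge_clear_tuples(gm_state);    /* illustrative helper */
    }
}
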
The last patch fixes the one remaining leak I saw after applying the
first two patches, namely that execParallel.c leaks the array palloc'd
by ExecParallelSetupTupleQueues --- just the array storage, not any of
the shm_mq_handles it points to. The given patch just adds a pfree
to ExecParallelFinish, but TBH I find this pretty unsatisfactory.
It seems like a significant modularity violation that execParallel.c
is responsible for creating those shm_mqs but not for cleaning them up.
That cleanup currently happens as a result of DestroyTupleQueueReader
calls done by nodeGather.c or nodeGatherMerge.c. I'm tempted to
propose that we should move both the creation and the destruction of
the TupleQueueReaders into execParallel.c; the current setup is not
just weird but requires duplicative coding in the Gather nodes.
(That would make it more difficult to do the early reader destruction
that nodeGather currently does, but I am not sure we care about that.)
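
For the record, the band-aid in the third patch amounts to no more than
this (sketch; pei->tqueue is the array returned by
ExecParallelSetupTupleQueues):

/* In ExecParallelFinish(), after the workers are known to be done: */
WaitForParallelWorkersToFinish(pei->pcxt);

/*
 * Free the array palloc'd by ExecParallelSetupTupleQueues().  Only the
 * array itself; the shm_mq_handles it points to are still owned by the
 * TupleQueueReaders over in nodeGather.c / nodeGatherMerge.c.
 */
if (pei->tqueue != NULL)
{
    pfree(pei->tqueue);
    pei->tqueue = NULL;
}
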
Another thing that seems like a poor factorization choice is that
DestroyTupleQueueReader is charged with doing shm_mq_detach even though
tqueue.c did not do the shm_mq_attach ... should we rethink that?
Comments?

regards, tom lane

Attachment | Content-Type | Size
---|---|---
fix-shm-mq-management.patch | text/x-diff | 6.3 KB
fix-gathermerge-leaks.patch | text/x-diff | 7.0 KB
fix-pei-leaks.patch | text/x-diff | 722 bytes