From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: dsm_unpin_segment
Date: 2016-08-09 04:37:25
Message-ID: CAEepm=29DZeWf44-4fzciAQ14iY5vCVZ6RUJ-KR2yzs3hPzrkw@mail.gmail.com
Lists: pgsql-hackers
On Tue, Aug 9, 2016 at 12:53 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> The larger picture here is that Robert is exhibiting a touching but
> unfounded faith that extensions using this feature will contain zero bugs.
> IMO there needs to be some positive defense against mistakes in using the
> pin/unpin API. As things stand, multiple pin requests don't have any
> fatal consequences (especially not on non-Windows), so I have little
> confidence that it's not happening in the field. I have even less
> confidence that there wouldn't be too many unpin requests.
OK, here is a version that defends against invalid sequences of
pin/unpin calls. I had to move dsm_impl_pin_segment into the block
protected by DynamicSharedMemoryControlLock, so that it comes after
the already-pinned check but before any state is updated, since it
makes a Windows syscall that can fail. That said, I've only tested
on Unix and will need to ask someone to test on Windows.
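Roughly, the new ordering looks like this (just a sketch to show the
ordering, not the patch verbatim; the "pinned" flag name and the error
text are illustrative assumptions):

    void
    dsm_pin_segment(dsm_segment *seg)
    {
        LWLockAcquire(DynamicSharedMemoryControlLock, LW_EXCLUSIVE);

        /* 1. Defend against redundant pin requests. */
        if (dsm_control->item[seg->control_slot].pinned)
            elog(ERROR, "cannot pin a segment that is already pinned");

        /*
         * 2. Make the implementation-level call before touching any
         * bookkeeping; on Windows it is a syscall that can fail, and an
         * error here must leave the control segment state unchanged.
         */
        dsm_impl_pin_segment(seg->handle, seg->impl_private);

        /* 3. Only now record the pin. */
        dsm_control->item[seg->control_slot].pinned = true;

        LWLockRelease(DynamicSharedMemoryControlLock);
    }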
> What exactly
> is an extension going to be doing to ensure that it doesn't do too many of
> one or the other?
An extension that manages segment lifetimes like this needs a
carefully designed protocol to get it right, probably involving state
in another shared memory area and some interlocking, not to mention a
lot of thought about cleanup.
Here's one use case: I have a higher-level object, a
multi-segment-backed shared memory allocator, which owns any number of
segments that together form a shared memory area. The protocol is
that the allocator always pins segments when it needs to create new
ones, because they need to survive as long as the control segment,
even though no one backend is guaranteed to have all of the auxiliary
segments mapped in (since they're created and attached on demand).
But when the control segment is detached by all backends and is due to
be destroyed, we need to unpin all the auxiliary segments so they
can also be destroyed, and that can be done from an on_dsm_detach
callback on the control segment. So I'm riding on the coattails of
the existing cleanup mechanism for the control segment, while making
sure that the auxiliary segments get pinned and unpinned exactly once.
I'll have more to say about that when I post that patch...
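For concreteness, here's a rough sketch of that protocol from the
extension's side. All of the names here (MyControl, create_aux_segment,
MAX_AUX, and so on) are made up for illustration, I'm assuming the
proposed dsm_unpin_segment() takes a dsm_handle, and I've omitted the
locking and the check that this is really the last backend to detach,
which a real implementation would need:

    #include "postgres.h"

    #include "storage/dsm.h"

    #define MAX_AUX 64

    typedef struct MyControl
    {
        int         naux;
        dsm_handle  aux[MAX_AUX];   /* handles of pinned auxiliary segments */
    } MyControl;

    static MyControl *my_control;   /* points into the mapped control segment */

    /* Create an auxiliary segment and pin it so it outlives its creator. */
    static dsm_handle
    create_aux_segment(Size size)
    {
        dsm_segment *seg = dsm_create(size, 0);
        dsm_handle   handle = dsm_segment_handle(seg);

        dsm_pin_segment(seg);       /* stays alive with no backends attached */
        my_control->aux[my_control->naux++] = handle;   /* sketch: no bounds check */
        return handle;
    }

    /* on_dsm_detach callback registered on the control segment. */
    static void
    control_area_on_detach(dsm_segment *seg, Datum arg)
    {
        int     i;

        /*
         * Runs while the control segment is still mapped.  A real
         * implementation must first verify, under a lock and using shared
         * state, that this is the last backend to detach; only then unpin.
         */
        for (i = 0; i < my_control->naux; ++i)
            dsm_unpin_segment(my_control->aux[i]);
    }

Each backend would register the callback with
on_dsm_detach(control_seg, control_area_on_detach, (Datum) 0) right
after creating or attaching the control segment, so the unpins happen
exactly once, as part of the control segment's existing cleanup path.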
--
Thomas Munro
http://www.enterprisedb.com
Attachment: dsm-unpin-segment-v2.patch (application/octet-stream, 9.7 KB)