From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)2ndquadrant(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: shared memory message queues
Date: 2013-12-05 19:07:39
Message-ID: CA+Tgmob-vCrAzenbWjr_GqOqLCYXmphKpXF056-C296iKDPNRQ@mail.gmail.com
Lists: pgsql-hackers
On Thu, Dec 5, 2013 at 8:29 AM, Andres Freund <andres(at)2ndquadrant(dot)com> wrote:
>> Patch #1, on-dsm-detach-v1.patch, adds the concept of on_dsm_detach
>> hooks
>> [snip]
>> The part of this patch which I
>> suppose will elicit some controversy is that I've had to rearrange
>> on_shmem_exit a bit. It turns out that during shmem_exit, we do
>> "user-level" cleanup, like aborting the transaction, first. We expect
>> that will probably release all of our shared-memory resources.
>
> Hm. The API change of on_shmem_exit() is going to cause a fair bit of
> pain. There are a fair number of extensions out there using it, and all
> would need to be littered with version-dependent #ifdefs. What about
> providing a version of on_shmem_exit() that allows specifying the exit
> phase, and making on_shmem_exit() a backward-compatible wrapper?
Or, we could have on_user_exit() or so and leave on_shmem_exit() alone
altogether, which would probably be transparent for nearly everyone.
I kind of like that better, except that I'm not sure whether
on_user_exit() is any good as a name.
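To make the wrapper idea concrete, here's a minimal sketch of what a
phase-aware registration function plus a backward-compatible wrapper
might look like; the phase names and on_shmem_exit_phase() are
hypothetical, not something from the patch:

    #include "postgres.h"
    #include "storage/ipc.h"

    /* Hypothetical sketch only -- names are illustrative. */
    typedef enum
    {
        SHMEM_EXIT_USER,        /* "user-level" cleanup, e.g. aborting the xact */
        SHMEM_EXIT_LOW_LEVEL    /* cleanup of shared-memory resources proper */
    } shmem_exit_phase;

    extern void on_shmem_exit_phase(shmem_exit_phase phase,
                                    pg_on_exit_callback function, Datum arg);

    /* Backward-compatible wrapper: existing extension code keeps compiling. */
    void
    on_shmem_exit(pg_on_exit_callback function, Datum arg)
    {
        on_shmem_exit_phase(SHMEM_EXIT_LOW_LEVEL, function, arg);
    }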
> FWIW, I find on_dsm_detach_cancel() to be a confusing name. I thought it
> might be a variant that is called in different circumstances than
> on_dsm_detach(). Maybe go with cancel_on_shmem_detach(), like with
> cancel_shmem_exit()?
It could be cancel_on_dsm_detach() if you like that better. Not
cancel_on_shmem_detach(), though.
>> Patch #2, shm-toc-v1.patch, provides a facility for sizing a dynamic
>> shared memory segment before creation, and for dividing it up into
>> chunks after it's been created. It therefore serves a function quite
>> similar to RequestAddinShmemSpace, except of course that there is only
>> one main shared memory segment created at postmaster startup time,
>> whereas new dynamic shared memory segments can come into existence on
>> the fly; and it serves even more conspicuously the function of
>> ShmemIndex, which enables backends to locate particular data
>> structures within the shared memory segment. It is however quite a
>> bit simpler than the ShmemIndex mechanism: we don't need the same
>> level of extensibility here that we do for the main shared memory
>> segment, because a new extension need not piggyback on an existing
>> dynamic shared memory segment, but can create a whole segment of its
>> own.
>
> Hm. A couple of questions/comments:
> * How do you imagine keys to be used/built?
> * Won't the sequential search over toc entries become problematic?
> * Some high level overview of how it's supposed to be used wouldn't
> hurt.
> * the estimator stuff doesn't seem to belong in the public header?
The test-shm-mq patch is intended to illustrate these points. In that
case, for an N-worker configuration, there are N+1 keys; that is, N
message queues and one fixed-size control area. The estimator stuff
is critically needed in the public header so that you can size your
DSM to hold the stuff you intend to store in it; again, see
test-shm-mq.
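To spell that out, the pattern in test-shm-mq looks roughly like this
(the control-area struct, magic number, and function are illustrative,
not taken verbatim from the patch; some signatures changed in later
releases, e.g. dsm_create() and shm_toc_lookup() grew extra arguments):

    #include "postgres.h"
    #include "storage/dsm.h"
    #include "storage/shm_toc.h"

    #define MY_MAGIC 0x79fb2447         /* arbitrary; identifies our TOC */

    /* hypothetical fixed-size control area, stored under key 0 */
    typedef struct
    {
        int     workers_total;
        int     workers_attached;
    } my_control_area;

    static dsm_segment *
    setup_segment(int nworkers, Size queue_size)
    {
        shm_toc_estimator e;
        Size        segsize;
        dsm_segment *seg;
        shm_toc    *toc;
        int         i;

        /* Phase 1: size the segment before creating it. */
        shm_toc_initialize_estimator(&e);
        shm_toc_estimate_chunk(&e, sizeof(my_control_area));
        for (i = 0; i < nworkers; i++)
            shm_toc_estimate_chunk(&e, queue_size); /* one ring buffer each */
        shm_toc_estimate_keys(&e, nworkers + 1);    /* N queues + control area */
        segsize = shm_toc_estimate(&e);

        /* Phase 2: create the segment and carve it up. */
        seg = dsm_create(segsize);  /* 9.4 signature; later takes flags too */
        toc = shm_toc_create(MY_MAGIC, dsm_segment_address(seg), segsize);
        shm_toc_insert(toc, 0, shm_toc_allocate(toc, sizeof(my_control_area)));
        /* keys 1..nworkers would hold the per-worker message queues */
        return seg;
    }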
>> Patch #3, shm-mq-v1.patch, is the heart of this series. It creates an
>> infrastructure for sending and receiving messages of arbitrary length
>> using ring buffers stored in shared memory (presumably dynamic shared
>> memory, but hypothetically the main shared memory segment could be
>> used).
>
> The API seems to assume it's in dsm, though?
The header file makes no reference to dsm anywhere, so I'm not sure
why you draw this conclusion.
>> Queues are single-reader and single-writer; they use process
>> latches to implement waiting for the queue to fill (in the case of the
>> reader) or drain (in the case of the writer).
>
> Hm. Too bad, I'd hoped for single-reader, multiple-writer :P
Sure, that might be useful, too, as might multiple-reader,
multiple-writer. But those things would come with performance and
complexity trade-offs of their own. I think it's appropriate to leave
the task of creating those things to the people who need them. If it
turns out that this can be enhanced to work like that without
meaningful loss of performance, that's fine by me, too, but I think
this has plenty of merit as-is.
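For reference, setting up one such single-reader/single-writer queue
looks roughly like this (a sketch against the posted API, reusing the
seg/toc/queue_size from the test-shm-mq example above):

    #include "postgres.h"
    #include "storage/proc.h"
    #include "storage/shm_mq.h"

    static shm_mq_handle *
    make_reader_queue(dsm_segment *seg, shm_toc *toc, Size queue_size)
    {
        shm_mq *mq = shm_mq_create(shm_toc_allocate(toc, queue_size),
                                   queue_size);

        shm_mq_set_receiver(mq, MyProc);    /* this backend is the sole reader */
        /* the single writer calls shm_mq_set_sender(mq, MyProc) on its side */
        return shm_mq_attach(mq, seg, NULL);    /* NULL: no worker handle */
    }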
>> A non-blocking mode is also available for situations where other
>> options might lead to deadlock.
>
> That's cool. I wonder if there's a way to discover, on the writing end,
> when new data can be sent. For reading there's a comment about waiting
> for the latch...
It's symmetric. The idea is that you try to read or write data;
should that fail, you wait on your latch and try again when it's set.
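In code, the symmetric pattern on the sending side is roughly this
(a sketch; WaitLatch() is the three-argument form current in 2013, and
shm_mq_send() the four-argument form from the patch):

    for (;;)
    {
        shm_mq_result res = shm_mq_send(mqh, nbytes, data, true);  /* nowait */

        if (res == SHM_MQ_SUCCESS)
            break;                  /* message fully queued */
        if (res == SHM_MQ_DETACHED)
            ereport(ERROR,
                    (errmsg("could not send message: receiver has detached")));

        /* SHM_MQ_WOULD_BLOCK: sleep until the reader sets our latch. */
        WaitLatch(&MyProc->procLatch, WL_LATCH_SET, 0);
        ResetLatch(&MyProc->procLatch);
        CHECK_FOR_INTERRUPTS();
    }

The receiving side is the mirror image, with shm_mq_receive() in place
of shm_mq_send().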
> Couple of questions:
> * Some introductory comments about the concurrency approach would be
> good.
Not sure exactly what to write.
> * shm_mq_attach() is talking about BackgroundWorkerHandles - is there a
> limitation that a bgworker has to be involved?
No. The only point of that is that someone might want to start
writing into a message queue before the reader has connected, or
reading before the writer has connected. If they do that and the
counterparty never attaches, they'll hang forever. You can handle
that by writing your own code to make sure both parties have attached
before starting to read or write... but if you happen to have a
BackgroundWorkerHandle handy, you can also pass that to
shm_mq_attach() and it'll return an error code if the background
worker in question is determined to have stopped.
Possibly there could be some similar mechanism to wait for a
non-background-worker to stop, but I haven't thought much about what
that would look like.
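For illustration, attaching with a worker handle looks roughly like
this (a sketch; filling in the BackgroundWorker struct is elided):

    BackgroundWorker worker;
    BackgroundWorkerHandle *handle;
    shm_mq_handle *mqh;

    /* ... fill in the BackgroundWorker struct ... */
    if (!RegisterDynamicBackgroundWorker(&worker, &handle))
        ereport(ERROR,
                (errmsg("could not register background worker")));

    /*
     * Because we pass the handle, queue operations can return
     * SHM_MQ_DETACHED if the worker is known to have stopped, instead
     * of hanging forever waiting for it to attach.
     */
    mqh = shm_mq_attach(mq, seg, handle);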
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company