From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Dawn Hollingsworth <dmh(at)airdefense(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Maximum Number Notifications
Date: 2004-05-12 19:56:24
Message-ID: 27977.1084391784@sss.pgh.pa.us
Lists: pgsql-general
Dawn Hollingsworth <dmh(at)airdefense(dot)net> writes:
> We seem to be having issues lately where the applications are not always
> receiving database notifications. I have been looking through the code
> and I think the problem may stem around a stored procedure that has a
NOTIFY in it that can possibly be triggered hundreds of times in a given
> minute.
> We are currently fixing this problem but my question is how many NOTIFYs
> can be stored in postgres if the client who is listening is not polling
> the database?
I checked the code and verified my recollection that multiple identical
NOTIFY commands within a single transaction are collapsed out (see
Async_Notify). So if your concern was a loop within a transaction, then
there's no issue. However, if you're thinking of hundreds of
transactions per minute that each issue a NOTIFY then those would all
get sent to the client.
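
For concreteness, here is a minimal libpq sketch of that collapsing
behavior, assuming a database named "mydb" and an illustrative condition
name "my_event" (error handling omitted):

/*
 * Several identical NOTIFYs issued inside one transaction are collapsed,
 * so the listening connection should see a single event, not three.
 */
#include <stdio.h>
#include <libpq-fe.h>

static void run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    PQclear(res);
}

int main(void)
{
    PGconn *listener = PQconnectdb("dbname=mydb");   /* assumed conninfo */
    PGconn *notifier = PQconnectdb("dbname=mydb");

    run(listener, "LISTEN my_event");

    /* Three identical NOTIFYs in a single transaction ... */
    run(notifier, "BEGIN");
    run(notifier, "NOTIFY my_event");
    run(notifier, "NOTIFY my_event");
    run(notifier, "NOTIFY my_event");
    run(notifier, "COMMIT");

    /* ... issuing any command lets libpq read the pending input. */
    run(listener, "SELECT 1");

    PGnotify *n;
    int count = 0;
    while ((n = PQnotifies(listener)) != NULL)
    {
        printf("notify \"%s\" from backend pid %d\n", n->relname, n->be_pid);
        count++;
        PQfreemem(n);
    }
    printf("events delivered: %d\n", count);         /* expect 1, not 3 */

    PQfinish(listener);
    PQfinish(notifier);
    return 0;
}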
If a particular client isn't doing database operations at all at the
moment (is not calling libpq at all) then I'd expect the connected
backend to soon get blocked on a send-queue-full condition. This would
be bad; it looks like it would block while holding lock on pg_listener
which would effectively block all listen/notify activity in the whole
database. However I can't see that any notifies would actually be lost
--- once the sleeping client wakes up and processes some input,
everything would pick up again. Since you're not complaining of the
database freezing up, this doesn't sound like it's your issue anyway.
Assuming that the client *is* doing database operations, libpq will
absorb incoming notifies into an internal list until the client asks
for them. The size of that list is limited by available memory in the
client. Since the event records aren't really very big (less than 50
bytes apiece in 7.2), I think it'd take quite a lot of events to cause a
problem --- hundreds of thousands, perhaps, would start to be an issue.
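
The usual way a client asks for those absorbed notifies looks roughly
like the sketch below: wait on the connection's socket, pull the data in
with PQconsumeInput, then empty the list with PQnotifies. The connection
string and condition name are placeholders.

#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

static void drain_notifies(PGconn *conn)
{
    int sock = PQsocket(conn);
    fd_set readable;
    struct timeval timeout = {1, 0};     /* wait at most one second */

    FD_ZERO(&readable);
    FD_SET(sock, &readable);
    if (select(sock + 1, &readable, NULL, NULL, &timeout) <= 0)
        return;                          /* nothing arrived (or select failed) */

    if (!PQconsumeInput(conn))           /* read whatever the backend sent */
    {
        fprintf(stderr, "connection trouble: %s", PQerrorMessage(conn));
        return;
    }

    PGnotify *n;
    while ((n = PQnotifies(conn)) != NULL)
    {
        printf("condition \"%s\" signaled by backend pid %d\n",
               n->relname, n->be_pid);
        PQfreemem(n);
    }
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=mydb");       /* assumed conninfo */

    PQclear(PQexec(conn, "LISTEN my_event"));
    for (;;)
        drain_notifies(conn);            /* keep asking; libpq buffers the rest */
}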
So the bottom line is that I see no mechanism that would cause notifies
to be lost entirely. You sure it's not a client coding problem?
Also, you do realize that notifies are defined like signals: sending the
same notify condition several times in close succession may result in
only one event being delivered to the client? (Basically, a given
notify condition will be signaled to the client only once per client
transaction, even if multiple NOTIFY commands were executed.) If you're
counting on one-for-one delivery of notifies then you need to redesign.
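
One common shape for that redesign, sketched under assumptions: keep the
real events in a table (here a hypothetical "pending_events" with id and
payload columns) and treat the notify purely as a wake-up hint, so each
wake-up drains everything queued no matter how many notifies were
coalesced. A handler like this would be called from a receive loop such
as the one above; the DELETE ... RETURNING form assumes a server newer
than the 7.2 discussed here (8.2 or later), older servers would SELECT
and then DELETE inside a transaction.

#include <stdio.h>
#include <libpq-fe.h>

static void handle_wakeup(PGconn *conn)
{
    /* One coalesced notify may stand for many queued events;
     * fetch and clear everything that is pending. */
    PGresult *res = PQexec(conn,
        "DELETE FROM pending_events RETURNING id, payload");

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
    {
        int i;
        for (i = 0; i < PQntuples(res); i++)
            printf("event %s: %s\n",
                   PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
    }
    else
        fprintf(stderr, "queue fetch failed: %s", PQerrorMessage(conn));

    PQclear(res);
}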
regards, tom lane