From: | Marko Tiikkaja <marko(at)joh(dot)to> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: proposal: LISTEN * |
Date: | 2015-11-19 16:08:44 |
Message-ID: | 564DF40C.2040004@joh.to |
Lists: | pgsql-hackers |
On 11/19/15 4:21 PM, Tom Lane wrote:
> Marko Tiikkaja <marko(at)joh(dot)to> writes:
>> I've in the past wanted to listen on all notification channels in the
>> current database for debugging purposes. But recently I came across
>> another use case. Since having multiple postgres backends listening for
>> notifications is very inefficient (one Thursday I found out 30% of our
>> CPU time was spent spinning on s_locks around the notification code), it
>> makes sense to implement a notification server of sorts which then
>> passes on notifications from postgres to interested clients.
>
> ... and then you gotta get the notifications to the clients, so it seems
> like this just leaves the performance question hanging.
I'm not sure which performance question you think is left hanging. If
every client is connected to postgres, you're waking up tens if not
hundreds of processes tens if not hundreds of times per second. Each of
them starts a transaction, checks which notifications in the queue are
visible according to clog and friends, goes through the tail pointers of
every other process to see whether it should advance the tail of the
queue, commits the transaction, and goes back to sleep only to be
immediately woken up again.
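The per-wakeup cost can be seen in a toy model. This is a simplified in-memory sketch of my own (not PostgreSQL's actual async.c code, and all names here are mine): every listener keeps a read position into a shared queue, and after reading, each one scans all other positions to decide whether the global tail can advance. With N listeners that's O(N) scanning per wakeup, done N times per notification.

```go
package main

import "fmt"

// queue is a toy model of the shared notification queue: entries plus a
// per-listener read position. Not real PostgreSQL code.
type queue struct {
	entries []string // notifications, oldest first
	tail    int      // absolute index of the oldest retained entry
	pos     []int    // absolute read position of each listener
}

func newQueue(nListeners int) *queue {
	return &queue{pos: make([]int, nListeners)}
}

func (q *queue) notify(payload string) {
	q.entries = append(q.entries, payload)
}

// wakeup models what each listener does when signaled: read everything
// new, then scan every other listener's position (the part done under
// locks in the real implementation) to try to advance the shared tail.
func (q *queue) wakeup(listener int) []string {
	head := q.tail + len(q.entries)
	unread := q.entries[q.pos[listener]-q.tail:]
	q.pos[listener] = head

	minPos := head
	for _, p := range q.pos { // O(number of listeners) per wakeup
		if p < minPos {
			minPos = p
		}
	}
	q.entries = q.entries[minPos-q.tail:]
	q.tail = minPos
	return append([]string(nil), unread...)
}

func main() {
	q := newQueue(3)
	q.notify("a")
	q.notify("b")
	fmt.Println(q.wakeup(0)) // each of the 3 listeners rescans positions
	fmt.Println(q.wakeup(1))
	fmt.Println(q.wakeup(2)) // only now can the tail advance
	fmt.Println(len(q.entries))
}
```

One notification delivered to N listeners costs N wakeups, each doing an O(N) position scan, which is where the quadratic behavior comes from as the listener count grows.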
If they're not connected to postgres directly, this kind of complex
processing happens only once, and then the notification server just
unconditionally serves all notifications to the clients based on a
simple map lookup. It should be trivial to see how the overhead is avoided.
> Why don't we find
> and fix the actual performance issue (assuming that 07e4d03fb wasn't it)?
07e4d03fb wasn't it, no.
> The reason I'm not terribly enthused about this proposal is that some
> implementations of LISTEN (for example, our pre-9.0 one) would be unable
> to support LISTEN *. It's not too hard to imagine that at some point we
> might wish to redo the existing implementation to reduce the overhead of
> all listeners seeing all messages, and then having promised we could do
> LISTEN * would be a serious restriction on our flexibility. So while
> I'm not necessarily trying to veto the idea, I think it has significant
> opportunity cost, and I'd like to see a more solid rationale than this
> one before we commit to it.
A reasonable point.
> In any case, it would be good to understand exactly what's the performance
> issue that's biting you. Can you provide a test case that reproduces
> that behavior?
I've attached a Go program which quite accurately simulates the
LISTEN/NOTIFY part of our setup. What it does is:
1) Open 50 connections, and issue a LISTEN in all of them
2) Open another 50 connections, and deliver one notification every
750 milliseconds from each of them
(My apologies for the fact that it's written in Go. It's the only thing
I can produce without spending a significant amount of time working on this.)
On the test server I'm running on, it doesn't look quite as bad as the
profiles we had in production, but s_lock is still the worst offender in
the profiles, called from:
```
- 80.33% LWLockAcquire
   + 48.34% asyncQueueReadAllNotifications
   + 23.09% SIGetDataEntries
   + 16.92% SimpleLruReadPage_ReadOnly
   + 10.21% TransactionIdIsInProgress
   + 1.27% asyncQueueAdvanceTail
```
which roughly looks like what I recall from our actual production profiles.
.m
Attachment | Content-Type | Size |
---|---|---|
notify.go | text/plain | 852 bytes |