From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Marko Tiikkaja <marko(at)joh(dot)to>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: proposal: LISTEN *
Date: 2015-11-19 16:35:40
Message-ID: 20151119163540.GL614468@alvherre.pgsql
Lists: pgsql-hackers
Marko Tiikkaja wrote:
> On the test server I'm running on, it doesn't look quite as bad as the
> profiles we had in production, but s_lock is still the worst offender in the
> profiles, called from:
>
> - 80.33% LWLockAcquire
> + 48.34% asyncQueueReadAllNotifications
> + 23.09% SIGetDataEntries
> + 16.92% SimpleLruReadPage_ReadOnly
> + 10.21% TransactionIdIsInProgress
> + 1.27% asyncQueueAdvanceTail
>
> which roughly looks like what I recall from our actual production profiles.
So the problem is the poor scalability of LWLock itself rather than of
async.c?  In master, the spinlock inside LWLock has been replaced with
atomics; does that branch work better?
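
(For illustration only: below is a minimal sketch, using C11 atomics, of the
difference at issue.  It is not PostgreSQL's actual LWLock code; the names,
bit layout, and functions are invented for the example, and waiting/wakeup is
left out entirely.  The point is that when the lock's counters are guarded by
a spinlock, every shared acquire serializes on that spinlock, which is what
surfaces as s_lock in the profile above; when the lock state is packed into a
single atomic word, a shared acquire is one compare-and-swap and readers that
don't conflict never spin on a separate lock.)

    /* Sketch only -- not PostgreSQL code.  Simplified lock state in one
     * atomic word: an exclusive flag plus a shared-holder count. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define EXCLUSIVE_FLAG  (1u << 31)      /* hypothetical bit layout */
    #define SHARED_MASK     (EXCLUSIVE_FLAG - 1)

    typedef struct
    {
        _Atomic uint32_t state;             /* exclusive flag + shared count */
    } sketch_lwlock;

    /* Try to take the lock in shared mode; return true on success. */
    static bool
    sketch_acquire_shared(sketch_lwlock *lock)
    {
        uint32_t old = atomic_load_explicit(&lock->state,
                                            memory_order_relaxed);

        for (;;)
        {
            if (old & EXCLUSIVE_FLAG)
                return false;               /* held exclusively; caller waits */

            /* One CAS bumps the shared count -- no per-lock spinlock. */
            if (atomic_compare_exchange_weak_explicit(&lock->state,
                                                      &old, old + 1,
                                                      memory_order_acquire,
                                                      memory_order_relaxed))
                return true;
            /* CAS failed: 'old' now holds the refreshed value; retry. */
        }
    }

    static void
    sketch_release_shared(sketch_lwlock *lock)
    {
        atomic_fetch_sub_explicit(&lock->state, 1, memory_order_release);
    }
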
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services