From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: Better LWLocks with compare-and-swap (9.4)
Date: 2013-05-16 12:25:34
Message-ID: 20130516122534.GS4361@tamriel.snowman.net
Lists: pgsql-hackers
* Heikki Linnakangas (hlinnakangas(at)vmware(dot)com) wrote:
> My theory is that after that point all the cores are busy,
> and processes start to be sometimes context switched while holding
> the spinlock, which kills performance. Has anyone else seen that
> pattern?
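To spell out that failure mode: with a plain test-and-set spin loop, a
waiter that gets scheduled while the holder is off-CPU just burns its
entire timeslice making no progress. Schematically, and purely as an
illustration (this is not our actual s_lock.c, which adds bounded spin
counts and sleep-based backoff), the pattern is:

static volatile int lock = 0;

static void
spin_acquire(void)
{
    /* GCC builtin: atomically set lock to 1, return the old value */
    while (__sync_lock_test_and_set(&lock, 1))
    {
        /*
         * Busy-wait until the lock looks free.  The kernel has no idea
         * we're waiting, so if the holder has been context switched
         * out, we spin away our whole timeslice for nothing.
         */
        while (lock)
            ;
    }
}

static void
spin_release(void)
{
    __sync_lock_release(&lock);     /* store 0 with release semantics */
}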
Isn't this the same issue that has prompted multiple people to propose
(sometimes with code, as I recall) ripping out our internal spinlock
implementation and replacing it with kernel-backed primitives that
handle exactly cases like the above better? Have you seen those
threads? Any thoughts about moving in that direction?
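For concreteness, on Linux those proposals generally meant building on
futexes. A rough sketch along the lines of the mutex in Ulrich
Drepper's "Futexes Are Tricky" paper (names and details are mine, not
taken from any of the actual posted patches) would look something like
this; the key property is that a contended waiter sleeps in the kernel
instead of spinning, so a descheduled holder doesn't cost everyone
else their timeslices:

#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

/* lock states: 0 = free, 1 = held, 2 = held with waiters */
static volatile uint32_t lock = 0;

static long
sys_futex(volatile uint32_t *addr, int op, uint32_t val)
{
    return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
}

static void
futex_acquire(void)
{
    uint32_t c = __sync_val_compare_and_swap(&lock, 0, 1);

    if (c == 0)
        return;                 /* fast path: no syscall when uncontended */

    do
    {
        /*
         * Flag the lock as contended, then sleep in the kernel until
         * the holder wakes us.  While we sleep we consume no CPU, and
         * the scheduler can give our timeslice to the lock holder.
         */
        if (c == 2 || __sync_val_compare_and_swap(&lock, 1, 2) != 0)
            sys_futex(&lock, FUTEX_WAIT, 2);
    } while ((c = __sync_val_compare_and_swap(&lock, 0, 2)) != 0);
}

static void
futex_release(void)
{
    /* if the old value was 2, someone is asleep and needs a wakeup */
    if (__sync_fetch_and_sub(&lock, 1) != 1)
    {
        lock = 0;
        sys_futex(&lock, FUTEX_WAKE, 1);
    }
}

The fast path is still a single compare-and-swap, same as today; the
kernel only gets involved once there's actual contention, which is
exactly the case where pure spinning falls over.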
> Curiously, I don't see that when connecting pgbench via TCP
> over localhost, only when connecting via unix domain sockets.
> Overall performance is higher over unix domain sockets, so I guess
> the TCP layer adds some overhead, hurting performance, and also
> affects scheduling somehow, making the steep drop go away.
I wonder if the kernel's locking around unix domain sockets is helping
us out here, whereas with a TCP connection it can't take advantage of
the same knowledge about which process is waiting? Just a hunch.
Thanks,
Stephen