From: | "Simon Riggs" <simon(at)2ndquadrant(dot)com> |
---|---|
To: | "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | "pgsql-hackers" <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: LWLock cache line alignment |
Date: | 2005-02-03 14:26:16 |
Message-ID: | KGEFLMPJFBNNLNOOOPLGOEDNCIAA.simon@2ndquadrant.com |
Lists: | pgsql-hackers |
> From: Tom Lane [mailto:tgl(at)sss(dot)pgh(dot)pa(dot)us] wrote
> "Simon Riggs" <simon(at)2ndquadrant(dot)com> writes:
> > It looks like padding out LWLock struct would ensure that each of
> > those were in separate cache lines?
>
> I've looked at this before and I think it's a nonstarter; increasing
> the size of a spinlock to 128 bytes is just not reasonable. (Remember
> there are two per buffer.)
Well, the performance is unreasonably poor, so it's time to do something;
if padding is unreasonable for the general case, it may need to be done
port-specifically.
> Also, there's no evidence it would actually help anything, because the
> contention we have been able to measure is on only one particular lock
> (BufMgrLock) anyway. But feel free to try it to see if you can see a
> difference.
Well, the Weird Context Switching issue isn't normal contention, which I
agree is concentrated on BufMgrLock.
Locks that share a cache line don't cause measurable user-space
contention, just poor performance, because the cache-spoil-and-refetch
delay is incurred many times more often than the minimum (ideal).
I'm thinking that the 128-byte cache line on Intel is sufficiently larger
than the 64-byte cache line on AMD to tip us into different behaviour at
runtime.
Best Regards, Simon Riggs