From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Some interesting results from tweaking spinlocks
Date: 2002-01-05 05:34:57
Message-ID: 10054.1010208897@sss.pgh.pa.us
Lists: pgsql-hackers
Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us> writes:
> My guess is that the lock is held for more than a few instructions, at
> least in some cases.
It is not. LWLock and a couple of other places are the only direct uses
of spinlocks, and none of them execute more than a few lines of C code
while holding a spinlock. Nor do they touch any wide range of memory
while doing so; your thought about cache stalls is a good one, but I
don't buy it.
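[Editor's note: to make that concrete, here is a minimal sketch of the
LWLockAcquire protocol described above -- paraphrased, not the actual
lwlock.c code; field and function names here are assumptions. The point
is that the spinlock protects only a handful of lines of bookkeeping,
and blocking, when needed, happens on a semaphore after the spinlock
has already been released.]

#include <stdatomic.h>
#include <semaphore.h>
#include <stdbool.h>

typedef struct LWLockSketch
{
    atomic_flag mutex;      /* the spinlock; init with ATOMIC_FLAG_INIT */
    bool        exclusive;  /* lock held exclusively? */
    int         shared;     /* number of shared holders */
    sem_t       waitsem;    /* stand-in for the per-process IPC semaphore */
} LWLockSketch;

static void
lwlock_acquire_exclusive(LWLockSketch *lock)
{
    for (;;)
    {
        /* --- spinlock held: only these few lines of C --- */
        while (atomic_flag_test_and_set(&lock->mutex))
            ;                       /* spin; the real s_lock backs off */
        bool mustwait = (lock->exclusive || lock->shared > 0);
        if (!mustwait)
            lock->exclusive = true;
        atomic_flag_clear(&lock->mutex);
        /* --- spinlock released --- */

        if (!mustwait)
            return;

        /*
         * Lock was busy: sleep on the semaphore (the analogue of
         * IpcSemaphoreLock in the profile below) and retry when woken.
         * The real code also queues the process on the lock's wait
         * list while still holding the spinlock, to avoid lost wakeups.
         */
        sem_wait(&lock->waitsem);
    }
}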
I've performed some profiling on that 4-way SMP machine, and it might
be useful to look at the call patterns for LWLock, which is certainly
the main use of spinlocks. This is an extract from gprof for one
backend process in a 25-client pgbench run, using CVS + second version
of LWLock patch:
-----------------------------------------------
0.00 0.00 1/420232 ExtendCLOG [475]
0.00 0.00 1/420232 InitBufferPool [517]
0.00 0.00 1/420232 InitBackendSharedInvalidationState [539]
0.00 0.00 1/420232 CleanupInvalidationState [547]
0.00 0.00 1/420232 LockMethodTableInit [511]
0.00 0.00 4/420232 GetPageWithFreeSpace [516]
0.00 0.00 8/420232 WaitIO [523]
0.00 0.00 8/420232 RecordAndGetPageWithFreeSpace [501]
0.00 0.00 10/420232 ReleaseAndReadBuffer [513]
0.00 0.00 11/420232 XLogWrite [266]
0.00 0.00 12/420232 ShmemInitStruct [494]
0.00 0.00 14/420232 SetBufferCommitInfoNeedsSave [509]
0.00 0.00 128/420232 ProcSleep [450]
0.00 0.00 289/420232 BufferReplace [304]
0.00 0.00 400/420232 TransactionIdSetStatus [263]
0.00 0.00 400/420232 GetNewObjectId [449]
0.00 0.00 400/420232 XLogFlush [215]
0.00 0.00 401/420232 GetNewTransactionId [448]
0.00 0.00 401/420232 CommitTransaction [47]
0.00 0.00 403/420232 LockReleaseAll [345]
0.00 0.00 762/420232 StartBufferIO [439]
0.00 0.00 1460/420232 TransactionIdGetStatus [192]
0.00 0.00 2000/420232 ReadNewTransactionId [388]
0.00 0.00 2000/420232 GetSnapshotData [334]
0.00 0.00 2870/420232 WriteBuffer [346]
0.00 0.00 3204/420232 XLogInsert [43]
0.00 0.00 9499/420232 LockRelease [107]
0.01 0.00 18827/420232 LockAcquire [66]
0.01 0.00 30871/420232 ReceiveSharedInvalidMessages [196]
0.03 0.01 76888/420232 ReleaseBuffer [80]
0.04 0.01 110970/420232 ReadBufferInternal [31]
0.06 0.01 157987/420232 LockBuffer [55]
[44] 5.4 0.15 0.04 420232 LWLockAcquire [44]
0.04 0.00 29912/30040 IpcSemaphoreLock [144]
0.00 0.00 4376/7985 s_lock [596]
-----------------------------------------------
0.00 0.00 1/420708 InitBufferPool [517]
0.00 0.00 1/420708 shmem_exit [554]
0.00 0.00 1/420708 InitShmemIndex [524]
0.00 0.00 1/420708 InitBackendSharedInvalidationState [539]
0.00 0.00 1/420708 LockMethodTableInit [511]
0.00 0.00 4/420708 GetPageWithFreeSpace [516]
0.00 0.00 8/420708 WaitIO [523]
0.00 0.00 8/420708 RecordAndGetPageWithFreeSpace [501]
0.00 0.00 11/420708 ShmemInitStruct [494]
0.00 0.00 11/420708 XLogWrite [266]
0.00 0.00 14/420708 SetBufferCommitInfoNeedsSave [509]
0.00 0.00 128/420708 ProcSleep [450]
0.00 0.00 289/420708 BufferReplace [304]
0.00 0.00 400/420708 TransactionLogUpdate [260]
0.00 0.00 400/420708 GetNewObjectId [449]
0.00 0.00 401/420708 CommitTransaction [47]
0.00 0.00 402/420708 GetNewTransactionId [448]
0.00 0.00 403/420708 LockReleaseAll [345]
0.00 0.00 762/420708 ReadBufferInternal [31]
0.00 0.00 762/420708 TerminateBufferIO [455]
0.00 0.00 800/420708 XLogFlush [215]
0.00 0.00 1460/420708 TransactionIdGetStatus [192]
0.00 0.00 2000/420708 ReadNewTransactionId [388]
0.00 0.00 2000/420708 GetSnapshotData [334]
0.00 0.00 2870/420708 WriteBuffer [346]
0.00 0.00 3280/420708 XLogInsert [43]
0.00 0.00 9499/420708 LockRelease [107]
0.00 0.00 18827/420708 LockAcquire [66]
0.01 0.00 30871/420708 ReceiveSharedInvalidMessages [196]
0.02 0.00 76888/420708 ReleaseBuffer [80]
0.02 0.00 110218/420708 BufferAlloc [42]
0.03 0.00 157987/420708 LockBuffer [55]
[70] 2.6 0.09 0.00 420708 LWLockRelease [70]
0.00 0.00 29982/30112 IpcSemaphoreUnlock [571]
0.00 0.00 3604/7985 s_lock [596]
What I draw from this is:
1. The BufMgrLock is the principal source of LWLock contention, since it
is locked more often than anything else. (The ReleaseBuffer,
ReadBufferInternal, and BufferAlloc calls are all there to
acquire/release BufMgrLock. Although LockBuffer numerically exceeds
these calls, the LockBuffer operations are spread out over all the
per-buffer context locks, so it's unlikely that there's much contention
for any one buffer context lock.) It's too late in the 7.2 cycle to
think about redesigning bufmgr's interlocking, but this ought to be a
high priority for future work.
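[Editor's note: a hypothetical illustration of the structural point,
not bufmgr's actual interface: a single global lock funnels every
buffer-manager operation through one point of contention, while the
per-buffer context locks spread LockBuffer traffic across NBUFFERS
independent locks, so two backends collide only when they touch the
same buffer.]

#include <pthread.h>

#define NBUFFERS 1024

/* One global lock: every lookup/release contends here. */
static pthread_mutex_t buf_mgr_lock = PTHREAD_MUTEX_INITIALIZER;

/* One lock per buffer: contention only on the same buffer. */
static pthread_mutex_t buf_context_locks[NBUFFERS];

static void
init_buffer_locks(void)
{
    for (int i = 0; i < NBUFFERS; i++)
        pthread_mutex_init(&buf_context_locks[i], NULL);
}

/* BufMgrLock-style critical section: serializes all callers. */
static void
with_buf_mgr_lock(void (*op)(void))
{
    pthread_mutex_lock(&buf_mgr_lock);
    op();
    pthread_mutex_unlock(&buf_mgr_lock);
}

/* LockBuffer-style critical section: serializes per buffer only. */
static void
with_buffer_context_lock(int buf_id, void (*op)(int))
{
    pthread_mutex_lock(&buf_context_locks[buf_id]);
    op(buf_id);
    pthread_mutex_unlock(&buf_context_locks[buf_id]);
}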
2. In this example, roughly one in fourteen LWLockAcquire calls (29912
of 420232) results in blocking (calling IpcSemaphoreLock). That seems
like a lot. I was seeing much better results on a uniprocessor under
essentially the same test: one in a thousand LWLockAcquire calls
blocked, not one in fourteen. What's causing that discrepancy?
3. The amount of spinlock-level contention seems too high too. We
are calling s_lock about one out of every hundred LWLockAcquire or
LWLockRelease calls; the equivalent figure from a uniprocessor profile
is one in five thousand. Given the narrow window in which the spinlock
is held, how can the contention rate be so high?
Anyone see an explanation for these last two observations?
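[Editor's note: for reference, a sketch of the TAS protocol those
s_lock numbers are measuring -- paraphrased, not the actual s_lock.h
code. The fast path is a single test-and-set; s_lock() is entered only
when that TAS fails, so the s_lock call counts in the profile are a
direct measure of spinlock contention.]

#include <stdatomic.h>
#include <sched.h>

typedef atomic_flag slock_t;   /* init with ATOMIC_FLAG_INIT */

/* Slow path: spin until the lock frees up, yielding now and then. */
static void
s_lock_sketch(slock_t *lock)
{
    int spins = 0;

    while (atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
    {
        if (++spins % 100 == 0)
            sched_yield();     /* the real s_lock backs off with select() */
    }
}

/* Fast path: one TAS; fall into the slow path only on contention. */
static inline void
spin_lock_acquire(slock_t *lock)
{
    if (atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
        s_lock_sketch(lock);   /* this is the call the profile counts */
}

static inline void
spin_lock_release(slock_t *lock)
{
    atomic_flag_clear_explicit(lock, memory_order_release);
}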
regards, tom lane