From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: gmaxwell(at)gmail(dot)com
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Spinlocks, yet again: analysis and proposed patches
Date: 2005-09-16 00:55:28
Message-ID: 18653.1126832128@sss.pgh.pa.us
Lists: pgsql-hackers
Gregory Maxwell <gmaxwell(at)gmail(dot)com> writes:
> might be useful to align the structure so it always crosses two lines
> and measure the performance difference.. the delta could be basically
> attributed to the cache line bouncing since even one additional bounce
> would overwhelm the other performance effects from the changed
> alignment.
Good idea. I goosed the struct declaration and setup code to arrange
that the BufMappingLock's spinlock and the rest of its data were in
different cache lines instead of the same one. The results (still
on Red Hat's 4-way Opteron):
previous best code (slock-no-cmpb and spin-delay-2):
    1: 31s    2: 42s    4: 51s     8: 100s
with LWLock padded to 32 bytes and correctly aligned:
    1: 31s    2: 41s    4: 51s     8: 97s
with LWLocks 32 bytes, but deliberately misaligned:
    1: 30s    2: 50s    4: 102s    8: 200s
The only possible reason for the second and third cases to differ is
having to touch multiple cache lines: the array indexing code is
exactly the same in both.
These last numbers are pretty close to what I got from the
separated-spinlock patch:
    1: 31s    2: 52s    4: 106s    8: 213s
So it seems there's no doubt that it's the doubled cache traffic that
was causing most of the problem there.
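For concreteness, here is a minimal C sketch of the padding-and-alignment
experiment described above, assuming a 64-byte Opteron cache line. The
declarations (slock_t, PaddedLWLock, CACHE_LINE, make_lock_array) are
illustrative assumptions, not PostgreSQL's actual code.

    /*
     * Minimal sketch of the padding/alignment experiment.  These are
     * illustrative declarations, not PostgreSQL's actual ones; a
     * 64-byte cache line is assumed.
     */
    #include <stdint.h>
    #include <stdlib.h>

    typedef volatile unsigned char slock_t;    /* TAS spinlock byte */

    typedef union PaddedLWLock
    {
        struct
        {
            slock_t  mutex;         /* spinlock protecting the fields below */
            char     exclusive;     /* held exclusively? */
            short    shared;        /* number of shared holders */
            void    *head;          /* wait-queue head */
            void    *tail;          /* wait-queue tail */
        } lock;
        char pad[32];               /* force sizeof(PaddedLWLock) == 32 */
    } PaddedLWLock;

    #define CACHE_LINE 64

    /*
     * Allocate the lock array at a chosen offset from a cache-line
     * boundary.  With misalign == 0, each 32-byte lock sits wholly
     * inside one 64-byte line.  With misalign == 48, locks at even
     * array indexes straddle a line boundary, so their spinlock and
     * the data it protects land in different lines -- the slow case
     * in the timings above.  (The raw pointer is leaked here; a real
     * implementation would keep it around for freeing.)
     */
    static PaddedLWLock *
    make_lock_array(size_t n, size_t misalign)
    {
        char     *raw = malloc(n * sizeof(PaddedLWLock) + CACHE_LINE + misalign);
        uintptr_t base;

        if (raw == NULL)
            return NULL;
        base = ((uintptr_t) raw + CACHE_LINE - 1) & ~(uintptr_t) (CACHE_LINE - 1);
        return (PaddedLWLock *) (base + misalign);
    }

The design point is that the union's pad member forces a power-of-two
size, so the base alignment of the array alone determines whether a
given lock straddles a line boundary.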
regards, tom lane