From: "Mark Cave-Ayland" <m(dot)cave-ayland(at)webbased(dot)co(dot)uk>
To: "'Tom Lane'" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-patches(at)postgresql(dot)org>
Subject: Re: WIP: bufmgr rewrite per recent discussions
Date: 2005-02-22 10:14:08
Message-ID: 9EB50F1A91413F4FA63019487FCD251D11312C@WEBBASEDDC.webbasedltd.local
Lists: pgsql-patches
> -----Original Message-----
> From: pgsql-patches-owner(at)postgresql(dot)org
> [mailto:pgsql-patches-owner(at)postgresql(dot)org] On Behalf Of Tom Lane
> Sent: 17 February 2005 15:46
> To: Mark Cave-Ayland
> Cc: pgsql-patches(at)postgresql(dot)org
> Subject: Re: [PATCHES] WIP: bufmgr rewrite per recent discussions
(cut)
> >> 3. Pad the LWLock struct (in src/backend/storage/lmgr/lwlock.c)
> >> to some power of 2 up to 128 bytes --- same issue of space
> >> wasted versus cross-lock contention.
>
> > Having seen the results above, is it still worth looking at this?
>
> Yeah, probably, because there are other possible contention
> sources besides buffers that might be alleviated by padding
> LWLocks. For instance the buffer manager global locks and
> the LockMgrLock are probably all in the same cache line at the moment.
Hi Tom,
Here are the results from the LWLock test. Firstly, as a refresher, here are
the results with your second patch and no modifications:
PATCH #2 No modifications

                   1000                        10000                       100000
       204.909702   205.01051       345.098727   345.411606      375.812059   376.37741
       195.100496   195.197463      348.791481   349.111363      314.718619   315.139878
       199.637965   199.735195      313.561366   313.803225      365.061177   365.666103
       195.935529   196.029082      325.893744   326.171754      370.040623   370.625072
       196.661374   196.756481      314.468751   314.711517      319.643145   320.099164

Mean:  198.4490132  198.5457462     329.5628138  329.841893      349.0551246  349.5815254
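Just to make the change concrete, the padding being benchmarked is roughly of
the following shape (an illustrative, self-contained sketch only -- the struct
fields and names here are placeholders, not the actual lwlock.c layout or the
exact patch):

    #include <stdio.h>

    #define PADDED_SIZE 64          /* assumed cache-line size; 128 for the other test */

    typedef struct MyLock
    {
        volatile int mutex;         /* spinlock protecting the fields below */
        int          exclusive;     /* number of exclusive holders (0 or 1) */
        int          shared;        /* number of shared holders */
    } MyLock;

    /* The union rounds each array element up to PADDED_SIZE bytes, so two
     * adjacent locks no longer share a cache line and a backend spinning
     * on locks[0] does not keep invalidating locks[1]. */
    typedef union MyLockPadded
    {
        MyLock lock;
        char   pad[PADDED_SIZE];
    } MyLockPadded;

    int
    main(void)
    {
        MyLockPadded locks[4];

        printf("sizeof(MyLock) = %zu, sizeof(MyLockPadded) = %zu\n",
               sizeof(MyLock), sizeof(locks[0]));
        return 0;
    }

(Padding only fixes the stride between adjacent locks; the base address of the
array also has to land on a cache-line boundary for the elements to line up.)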
Here are the results with ALIGNOF_BUFFER=128 and padding LWLock to 64 bytes:
PATCH #2 with ALIGNOF_BUFFER = 128 and LWLock padded to 64 bytes

                   1000                        10000                       100000
       199.672932   199.768756      307.051571   307.299088      367.394745   368.016266
       196.443585   196.532912      344.898219   345.204228      375.300921   375.979186
       191.098411   191.185807      329.485633   329.77679       305.413304   305.841889
       201.110132   201.210049      314.219784   314.476356      314.03306    314.477869
       196.615748   196.706032      337.315295   337.62437       370.537538   371.16593

Mean:  196.9881616  197.0807112     326.5941004  326.8761664     346.5359136  347.096228
And finally here are the results with ALIGNOF_BUFFER = 128 and LWLock padded
to 128 bytes:
PATCH #2 with ALIGNOF_BUFFER = 128 and LWLock padded to 128 bytes

                   1000                        10000                       100000
       195.357405   195.449704      346.916069   347.235895      373.354775   373.934842
       190.428061   190.515077      323.932436   324.211975      361.908206   362.476886
       206.059573   206.159472      338.288825   338.590642      306.22198    306.618689
       195.336711   195.427757      309.316534   309.56603       305.295391   305.695336
       188.896205   188.983969      322.889651   323.245394      377.673313   378.269907

Mean:  195.215591   195.3071958     328.268703   328.5699872     344.890733   345.399132
So again I don't see any performance improvement. However, I did manage to
find out what was causing the 'freezing' I mentioned in my earlier email. If I
temporarily set fsync = false in postgresql.conf, the freezing goes away, so
I'm guessing it's something to do with disk/kernel caches and buffering. Since
the drives are software RAID 1 with ext3, I suspect the server becomes I/O
bound under load, which is perhaps why padding the data structures doesn't
seem to make much difference. I'm not sure whether this makes the test results
particularly useful though :(
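For completeness, the only change I made for that check was in postgresql.conf
on this test installation (running with fsync off is of course not crash-safe,
so it was purely a temporary diagnostic):

    # postgresql.conf -- temporary diagnostic only; not crash-safe
    fsync = false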
Kind regards,
Mark.
------------------------
WebBased Ltd
South West Technology Centre
Tamar Science Park
Plymouth
PL6 8BT
T: +44 (0)1752 791021
F: +44 (0)1752 791023
W: http://www.webbased.co.uk