From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Noah Misch <noah(at)leadboat(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Andres Freund <andres(at)anarazel(dot)de>, Ashutosh Sharma <ashu(dot)coek88(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance degradation in commit 6150a1b0
Date: 2016-04-13 03:40:43
Message-ID: CA+TgmobpHAqsOeHc-ooRsjzTKw1H4s4P1VBtwh1KkKO+6Mp8_Q@mail.gmail.com
Lists: pgsql-hackers
On Tue, Apr 12, 2016 at 10:30 PM, Noah Misch <noah(at)leadboat(dot)com> wrote:
> That sounds like this open item is ready for CLOSE_WAIT status; is it?
I just retested this on power2. Here are the results. I retested
3fed4174 and 6150a1b0 plus master as of deb71fa9. 5-minute pgbench -S
runs, scale factor 300, with predictable prewarming to minimize
variation, as well as numactl --interleave. Each result is a median
of three.
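In rough outline, the runs were of this shape (an illustrative sketch, not the literal script; the pg_prewarm call, --interleave=all, and the -j setting in particular are placeholders):

    # start the server with interleaved memory allocation
    numactl --interleave=all pg_ctl start -D $PGDATA

    # create and populate the test database at scale factor 300
    createdb pgbench
    pgbench -i -s 300 pgbench

    # prewarm so every run starts from the same buffer-cache state
    psql -d pgbench -c "CREATE EXTENSION IF NOT EXISTS pg_prewarm"
    psql -d pgbench -c "SELECT pg_prewarm('pgbench_accounts')"

    # one 5-minute select-only run per client count; three runs each, median reported
    for c in 1 8 32 64 128; do
        pgbench -S -T 300 -c $c -j $c pgbench
    done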
1 client:    3fed4174 = 13701.014931,  6150a1b0 = 13669.626916,  master = 19685.571089
8 clients:   3fed4174 = 126676.357079, 6150a1b0 = 125239.911105, master = 122940.079404
32 clients:  3fed4174 = 323989.685428, 6150a1b0 = 338638.095126, master = 333656.861590
64 clients:  3fed4174 = 495434.372578, 6150a1b0 = 457794.475129, master = 493034.922791
128 clients: 3fed4174 = 376412.090366, 6150a1b0 = 363157.294391, master = 625498.280370
On this test, master comes out about the same as 3fed4174 at 8, 32, and
64 clients, but the 1-client and 128-client results are dramatically
improved on current master. The 1-client result is a lot more surprising
than the 128-client result; I don't know what's going on there. But
anyway, I don't see a regression here.
So, yes, I would say this should go to CLOSE_WAIT at this point,
unless Amit or somebody else turns up further evidence of a continuing
issue here.
Random points of possible interest:
1. During a 128-client run, top shows about 45% user time, 10% system
time, 45% idle.
2. About 3 minutes into a 128-client run, perf looks like this
(substantially abridged):
  3.55%  postgres  postgres      [.] GetSnapshotData
  2.15%  postgres  postgres      [.] LWLockAttemptLock
             |--32.82%-- LockBuffer
             |            |--48.59%-- _bt_relandgetbuf
             |            |--44.07%-- _bt_getbuf
             |--29.81%-- ReadBuffer_common
             |--23.88%-- GetSnapshotData
             |--5.30%--  LockAcquireExtended
  2.12%  postgres  postgres      [.] LWLockRelease
  2.02%  postgres  postgres      [.] _bt_compare
  1.88%  postgres  postgres      [.] hash_search_with_hash_value
             |--47.21%-- BufTableLookup
             |--10.93%-- LockAcquireExtended
             |--5.43%--  GetPortalByName
             |--5.21%--  ReadBuffer_common
             |--4.68%--  RelationIdGetRelation
  1.87%  postgres  postgres      [.] AllocSetAlloc
  1.42%  postgres  postgres      [.] PinBuffer.isra.3
  0.96%  postgres  libc-2.17.so  [.] __memcpy_power7
  0.89%  postgres  postgres      [.] UnpinBuffer.constprop.7
  0.80%  postgres  postgres      [.] PostgresMain
  0.80%  postgres  postgres      [.] pg_encoding_mbcliplen
  0.71%  postgres  postgres      [.] hash_any
  0.62%  postgres  postgres      [.] AllocSetFree
  0.59%  postgres  postgres      [.] palloc
  0.57%  postgres  libc-2.17.so  [.] _int_free
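(For reference, a profile of that shape can be gathered with something along these lines; the sampling window is arbitrary here, not what was actually used:)

    # system-wide sampling with call graphs, partway into the run
    perf record -a -g -- sleep 60
    perf report --stdio -g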
3. A context-switch profile, somewhat amazingly, shows no context
switches for anything other than waiting on client read, implying that
performance is entirely constrained by memory bandwidth and CPU speed,
not lock contention.
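(The context-switch profile is the same idea, just sampling the context-switches software event instead of CPU cycles; duration again arbitrary:)

    # record where processes go to sleep: sample on the context-switches event
    perf record -e context-switches -a -g -- sleep 60
    perf report --stdio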
> If someone does retest this, it would be informative to see how the system
> performs with 6150a1b0 reverted. Your testing showed performance of 6150a1b0
> alone and of 6150a1b0 plus predecessors of 008608b and 4835458. I don't
> recall seeing figures for 008608b + 4835458 - 6150a1b0, though.
That revert isn't trivial: even what exactly it would mean at this
point is somewhat subjective. I'm also not sure there is much point.
6150a1b08a9fe7ead2b25240be46dddeae9d98e1 was written in such a way
that only platforms with single-byte spinlocks were going to have a
BufferDesc that fits into 64 bytes, which in retrospect was a bit
short-sighted. Because the changes that were made to get it back down
to 64 bytes might also have other performance-relevant consequences,
it's a bit hard to be sure that that was the precise thing that caused
the regression. And of course there was a flurry of other commits going
in at the same time, some even on related topics, which further adds
to the difficulty of pinpointing this precisely. All that is a bit
unfortunate in some sense, but I think we're just going to have to
keep moving forward and hope for the best.
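(As an aside, checking whether BufferDesc really lands within a single 64-byte cache line on a given build is easy enough with pahole; the binary path below is just a placeholder, and the build needs debug symbols:)

    # print the layout, holes, and total size of BufferDesc (requires a build with -g)
    pahole -C BufferDesc /usr/local/pgsql/bin/postgres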
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company