From: Andres Freund <andres(at)anarazel(dot)de>
To: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Improving connection scalability: GetSnapshotData()
Date: 2020-09-06 18:56:19
Message-ID: 20200906185619.nde4ykyukqgrnrow@alap3.anarazel.de
Lists: pgsql-hackers
Hi,
On 2020-09-06 14:05:35 +0300, Konstantin Knizhnik wrote:
> On 04.09.2020 21:53, Andres Freund wrote:
> >
> > > May be it is because of more complex architecture of my server?
> > Think we'll need profiles to know...
>
> This is "perf top" of pgbench -c 100 -j 100 -M prepared -S
>
> 12.16% postgres [.] PinBuffer
> 11.92% postgres [.] LWLockAttemptLock
> 6.46% postgres [.] UnpinBuffer.constprop.11
> 6.03% postgres [.] LWLockRelease
> 3.14% postgres [.] BufferGetBlockNumber
> 3.04% postgres [.] ReadBuffer_common
> 2.13% [kernel] [k] _raw_spin_lock_irqsave
> 2.11% [kernel] [k] switch_mm_irqs_off
> 1.95% postgres [.] _bt_compare
>
>
> Looks like most of the time is spent in buffer locks.
Hm, that is interesting / odd. If you record a profile with call graphs
(e.g. --call-graph dwarf), where are all the LWLockAttemptLock calls
coming from?
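For reference, a call-graph profile of the whole system during a pgbench run could be recorded roughly like this (a sketch; the function name is made up, and it assumes perf and debug symbols are installed):

```shell
# Sketch: record a system-wide call-graph profile while the benchmark
# is running, then inspect where LWLockAttemptLock is called from.
profile_backends() {
    # -a samples all CPUs; DWARF-based unwinding recovers full call
    # chains even when postgres was built without frame pointers,
    # at the cost of much larger perf.data files.
    perf record -a --call-graph dwarf -- sleep 30
    # Show the report with callers expanded rather than children.
    perf report --no-children
}
```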
I assume the machine you're talking about is an 8 socket machine?
What if you:
a) start postgres and pgbench with numactl --interleave=all
b) start postgres with numactl --cpunodebind=0,1 --membind=0,1
in case you have 4 sockets, or 0,1,2,3 in case you have 8 sockets?
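Concretely, the two variants could look something like the sketch below (function names and the use of pg_ctl/$PGDATA are illustrative assumptions; numactl treats --interleave and --membind as mutually exclusive memory policies, so the pinned variant uses --membind only):

```shell
# a) interleave memory allocations across all NUMA nodes for both
#    the server and the benchmark client.
start_interleaved() {
    numactl --interleave=all pg_ctl -D "$PGDATA" start
    numactl --interleave=all pgbench -c 100 -j 100 -M prepared -S -T 60
}

# b) confine postgres to the first two sockets: CPUs and memory both
#    come only from NUMA nodes 0 and 1 (use 0,1,2,3 on 8 sockets).
start_pinned() {
    numactl --cpunodebind=0,1 --membind=0,1 pg_ctl -D "$PGDATA" start
}
```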
> And which pgbench database scale factor you have used?
200
Another thing you could try is to run 2-4 pgbench instances in different
databases.
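That split could be scripted roughly as follows (a sketch; the database names bench1..bench4 and the per-instance client counts are made up, and scale 200 matches the figure mentioned above):

```shell
# Sketch: run 4 independent pgbench instances against separate
# databases, so the instances share no buffers or per-database locks.
run_split_bench() {
    for i in 1 2 3 4; do
        createdb "bench$i"
        pgbench -i -s 200 "bench$i"
    done
    # Split the 100 clients / 100 threads across the four databases
    # and run the select-only workloads concurrently.
    for i in 1 2 3 4; do
        pgbench -c 25 -j 25 -M prepared -S -T 60 "bench$i" &
    done
    wait
}
```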
Greetings,
Andres Freund