From: Yura Sokolov <y(dot)sokolov(at)postgrespro(dot)ru>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: sinvaladt.c: remove msgnumLock, use atomic operations on maxMsgNum
Date: 2025-03-25 10:52:00
Message-ID: cb8b4abb-d281-47ad-a791-b62f3522cde5@postgrespro.ru
Lists: pgsql-hackers
Good day, Andres
24.03.2025 16:08, Andres Freund wrote:
> On 2025-03-24 13:41:17 +0300, Yura Sokolov wrote:
>> 21.03.2025 19:33, Andres Freund wrote:
>>> I'd also like to know a bit more about the motivation here - I can easily
>>> believe that you hit contention around the shared inval queue, but I find it
>>> somewhat hard to believe that a spinlock that's acquired *once* per batch
>>> ("quantum"), covering a single read/write, is going to be the bottleneck,
>>> rather than the much longer held LWLock, that protects iterating over all
>>> procs.
>>>
>>> Have you verified that this actually addresses the performance issue?
>>
>> Problems with this spinlock were observed by at least two independent
>> technical support teams. First, a friendly vendor company shared the idea
>> to remove it. We don't know their exact situation, but I suppose it was
>> quite similar to the one our tech support investigated at our client some
>> months later:
>>
>> (Quote from the tech support report:)
>>> Almost 20% of CPU time is spent at backtraces like:
>> 4b0d2d s_lock (/opt/pgpro/ent-15/bin/postgres)
>> 49c847 SIGetDataEntries
>> 49bf94 ReceiveSharedInvalidMessages
>> 4a14ba LockRelationOid
>> 1671f4 relation_open
>> 1de1cd table_open
>> 5e82aa RelationGetStatExtList
>> 402a01 get_relation_statistics (inlined)
>> 402a01 get_relation_info
>> 407a9e build_simple_rel
>> 3daa1d add_base_rels_to_query
>> 3daa1d add_base_rels_to_query
>> 3dd92b query_planner
>>
>>
>> The client has many NUMA nodes in a single machine, and the software
>> actively generates invalidation messages (probably due to active usage of
>> temporary tables).
>>
>> Since the backtrace is quite obvious and ends at s_lock, the patch should
>> help.
>
> I don't believe we have the whole story here. It just doesn't seem plausible
> that, with the current code, the spinlock in SIGetDataEntries() would be the
> bottleneck, rather than contention on the lwlock. The spinlock just covers a
> *single read*. Unless pgpro has modified the relevant code?
>
> One possible explanation for why the spinlock shows up so badly is that it is
> due to false sharing. Note that SiSeg->msgnumLock and the start of
> SiSeg->buffer[] are on the same cache line.
>
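For reference, the layout in question is roughly the following (an abridged
sketch of SISeg from sinvaladt.c, written from memory; exact fields and
ordering may differ between versions):

    typedef struct SISeg
    {
        int         minMsgNum;      /* oldest message still needed */
        int         maxMsgNum;      /* next message number to assign */
        int         nextThreshold;  /* # of messages until cleanup */

        slock_t     msgnumLock;     /* spinlock protecting maxMsgNum */

        /* The message buffer starts right after the lock, so the lock
         * and the first buffer entries can share a cache line: writers
         * appending to buf[] keep invalidating the line readers spin on. */
        SharedInvalidationMessage buf[MAXNUMMESSAGES];

        ProcState   procState[FLEXIBLE_ARRAY_MEMBER];
    } SISeg;
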
> How was this "Almost 20% of CPU time is spent at backtraces like" determined?
Excuse me, I didn't attach the flamegraph collected by our tech support from
the client's server during the peak of the problem. I attach it now.
If you open it in a browser and search for "SIGetDataEntries", you'll see it
consumes 18.4% of CPU time. It is not a single large bar; instead there are
dozens of calls to SIGetDataEntries, and every one spends almost all its
time in s_lock. If you search for s_lock, it consumes 16.9%, and almost
every call to s_lock comes from SIGetDataEntries.
It looks like we call ReceiveSharedInvalidMessages
(AcceptInvalidationMessages, actually) too frequently during planning. And
if there is a large stream of invalidation messages, SIGetDataEntries has
work to do very frequently. Therefore many backends that are planning their
queries at that moment start to fight over msgNumLock.
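To be concrete, the reader side of SIGetDataEntries() guards only a single
fetch with that spinlock; roughly (a paraphrased sketch, not the verbatim
source):

    SpinLockAcquire(&segP->msgnumLock);
    max = segP->maxMsgNum;
    SpinLockRelease(&segP->msgnumLock);

    /* With a heavy message stream, stateP->nextMsgNum is almost always
     * behind max, so each of these frequent planner-time calls both
     * takes the lock and goes on to copy messages. */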
If ReceiveSharedInvalidMessages (and SIGetDataEntries through it) were
called rarely, then your conclusion would be right: taking a spinlock around
a single read of one variable before processing a large batch of messages
does not look like a source of problems. But since it is called very
frequently, and the stream of messages is high, "there are always a few new
messages".
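That is why the patch removes msgnumLock and reads maxMsgNum with atomic
operations instead. A sketch of the reader's half of the idea, using the
pg_atomic API (illustration only; the actual patch must also take care of
the writer's ordering):

    /* sketch: maxMsgNum becomes a pg_atomic_uint32, msgnumLock is gone */
    max = (int) pg_atomic_read_u32(&segP->maxMsgNum);
    pg_read_barrier();      /* order the load before the reads of buf[] */

    /* The writer must correspondingly publish its buf[] entry before
     * advancing maxMsgNum; that half is omitted here. */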
As I've said, it is most probably due to use of the famous 1C software,
which uses a lot of temporary tables and thus generates a high volume of
invalidation messages. We've patched pgpro postgres to NOT SEND most of the
invalidation messages generated by temporary tables, but it is difficult to
suppress all of them.
--
regards
Yura Sokolov aka funny-falcon
Attachment: perf20250123_1120.lst.bind.svg.gz (application/gzip, 343.4 KB)