From: Andres Freund <andres(at)anarazel(dot)de>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Speed up transaction completion faster after many relations are accessed in a transaction
Date: 2019-02-19 00:16:39
Message-ID: 20190219001639.ft7kxir2iz644alf@alap3.anarazel.de
Lists: pgsql-hackers
Hi,
On 2019-02-18 18:42:32 -0500, Tom Lane wrote:
> "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com> writes:
> > The attached patch speeds up transaction completion when any prior transaction accessed many relations in the same session.
>
> Hm. Putting a list header for a purely-local data structure into shared
> memory seems quite ugly. Isn't there a better place to keep that?
Yea, I think it'd be just as fine to store that in a static
variable (best defined directly beside LockMethodLocalHash).
(Btw, I'd be entirely unsurprised if moving away from a dynahash for
LockMethodLocalHash would be beneficial)
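To be concrete, a minimal sketch of the static-variable variant (not the
posted patch; session_locallocks and the LOCALLOCK member members_node are
invented names):

    #include "lib/ilist.h"

    /* lives in lock.c, right beside the existing backend-local hash table */
    static dlist_head session_locallocks = DLIST_STATIC_INIT(session_locallocks);

    /* in LockAcquireExtended(), when a LOCALLOCK entry is first created: */
        dlist_push_tail(&session_locallocks, &locallock->members_node);

    /* in LockReleaseAll() etc., instead of a hash_seq_search() over the
     * whole (possibly bloated) table: */
        dlist_mutable_iter iter;

        dlist_foreach_modify(iter, &session_locallocks)
        {
            LOCALLOCK  *locallock = dlist_container(LOCALLOCK, members_node, iter.cur);

            /* ... process/release locallock, then unlink it ... */
            dlist_delete(&locallock->members_node);
        }

That keeps the list header out of shared memory entirely, and iteration cost
becomes proportional to the number of locks actually held rather than to the
size the hash table has grown to.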
> Do we really want a dlist here at all? I'm concerned that bloating
> LOCALLOCK will cost us when there are many locks involved. This patch
> increases the size of LOCALLOCK by 25% if I counted right, which does
> not seem like a negligible penalty.
It's currently
struct LOCALLOCK {
        LOCALLOCKTAG               tag;                  /*     0    20 */
        /* XXX 4 bytes hole, try to pack */
        LOCK *                     lock;                 /*    24     8 */
        PROCLOCK *                 proclock;             /*    32     8 */
        uint32                     hashcode;             /*    40     4 */
        /* XXX 4 bytes hole, try to pack */
        int64                      nLocks;               /*    48     8 */
        _Bool                      holdsStrongLockCount; /*    56     1 */
        _Bool                      lockCleared;          /*    57     1 */
        /* XXX 2 bytes hole, try to pack */
        int                        numLockOwners;        /*    60     4 */
        /* --- cacheline 1 boundary (64 bytes) --- */
        int                        maxLockOwners;        /*    64     4 */
        /* XXX 4 bytes hole, try to pack */
        LOCALLOCKOWNER *           lockOwners;           /*    72     8 */

        /* size: 80, cachelines: 2, members: 10 */
        /* sum members: 66, holes: 4, sum holes: 14 */
        /* last cacheline: 16 bytes */
};
Seems we could trivially squeeze most of the bytes for a dlist node out
of that padding.
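Something along these lines, e.g. (just a sketch; offsets assume a 64-bit
build, the exact field order is negotiable, and members_node is the invented
list link from above):

    typedef struct LOCALLOCK
    {
        LOCALLOCKTAG    tag;                    /*  0  20 */
        uint32          hashcode;               /* 20   4  (fills the old hole after tag) */
        LOCK           *lock;                   /* 24   8 */
        PROCLOCK       *proclock;               /* 32   8 */
        LOCALLOCKOWNER *lockOwners;             /* 40   8 */
        int64           nLocks;                 /* 48   8 */
        int             numLockOwners;          /* 56   4 */
        int             maxLockOwners;          /* 60   4 */
        dlist_node      members_node;           /* 64  16  (new: list membership) */
        bool            holdsStrongLockCount;   /* 80   1 */
        bool            lockCleared;            /* 81   1 */
        /* 6 bytes trailing padding: size 88, vs. 80 today and ~96 if the
         * dlist_node were simply appended */
    } LOCALLOCK;

So the 16-byte dlist_node would cost us 8 bytes of growth instead of 16.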
> My own thought about how to improve this situation was just to destroy
> and recreate LockMethodLocalHash at transaction end (or start)
> if its size exceeded $some-value. Leaving it permanently bloated seems
> like possibly a bad idea, even if we get rid of all the hash_seq_searches
> on it.
OTOH, that'll force constant incremental resizing of the hashtable, for
workloads that regularly need a lot of locks. And I'd assume in most
cases if one transaction needs a lot of locks it's quite likely that
future ones will need a lot of locks, too.
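For reference, the recreate-at-threshold idea would look roughly like this
(purely hypothetical sketch: the threshold, the peak_locallocks counter and
the CreateLocalLockHash() helper - the latter just wrapping the existing
hash_create() call - are all invented), called at transaction end:

    #define LOCALLOCK_HASH_RESET_THRESHOLD  64      /* "$some-value" */

    static long peak_locallocks = 0;    /* would be bumped in LockAcquireExtended() */

    static void
    AtEOXact_ResetLocalLockHash(void)
    {
        /* all locks must already have been released at this point */
        Assert(hash_get_num_entries(LockMethodLocalHash) == 0);

        if (peak_locallocks > LOCALLOCK_HASH_RESET_THRESHOLD)
        {
            hash_destroy(LockMethodLocalHash);
            LockMethodLocalHash = CreateLocalLockHash();
        }
        peak_locallocks = 0;
    }

A session that needs many locks in every transaction would trip that
threshold every time, and then pay to grow the freshly created table back up
again in the next transaction - that's the incremental resizing cost I'm
worried about.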
Greetings,
Andres Freund