From: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>
To: x4mmm(at)yandex-team(dot)ru
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: MultiXact\SLRU buffers configuration
Date: 2020-05-14 01:25:26
Message-ID: 20200514.102526.1602501255796880628.horikyota.ntt@gmail.com
Lists: pgsql-hackers
At Wed, 13 May 2020 23:08:37 +0500, "Andrey M. Borodin" <x4mmm(at)yandex-team(dot)ru> wrote in
>
>
> > On 11 May 2020, at 16:17, Andrey M. Borodin <x4mmm(at)yandex-team(dot)ru> wrote:
> >
> > I've gone ahead and created 3 patches:
> > 1. Configurable SLRU buffer sizes for MultiXactOffsets and MultiXactMembers
> > 2. Reduce locking level to shared on read of MultiXactId members
> > 3. Configurable cache size
>
> I'm looking more at MultiXact and it seems to me that we have a race condition there.
>
> When we create a new MultiXact we do:
> 1. Generate new MultiXactId under MultiXactGenLock
> 2. Record new mxid with members and offset to WAL
> 3. Write offset to SLRU under MultiXactOffsetControlLock
> 4. Write members to SLRU under MultiXactMemberControlLock
But don't we hold an exclusive lock on the buffer through all of the steps
above?
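
Just to make the ordering concrete, here is a minimal standalone sketch of
the four creation steps as quoted (plain pthreads; all array, lock and
function names are invented for illustration and are not the ones in
multixact.c):

/*
 * Toy model of the creation path quoted above.  All names here are
 * invented for illustration; this is not the PostgreSQL code.
 */
#include <pthread.h>
#include <stdint.h>

#define MAX_MXID    1024
#define MAX_MEMBERS 8

static pthread_mutex_t gen_lock    = PTHREAD_MUTEX_INITIALIZER; /* stands in for MultiXactGenLock */
static pthread_mutex_t offset_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for MultiXactOffsetControlLock */
static pthread_mutex_t member_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for MultiXactMemberControlLock */

static uint32_t next_mxid = 1;
static uint32_t next_offset = 1;
static uint32_t offset_slru[MAX_MXID];                  /* 0 means "offset not written yet" */
static uint32_t member_slru[MAX_MXID * MAX_MEMBERS];    /* 0 means "member not written yet" */

static uint32_t
create_multixact(const uint32_t *xids, int nxids)
{
	uint32_t	mxid;
	uint32_t	off;
	int			i;

	/* step 1: generate the new mxid (and member offset) under the generation lock */
	pthread_mutex_lock(&gen_lock);
	mxid = next_mxid++;
	off = next_offset;
	next_offset += (uint32_t) nxids;
	pthread_mutex_unlock(&gen_lock);

	/* step 2: the WAL record for (mxid, off, xids[]) would be emitted here */

	/* step 3: write the offset to its SLRU page */
	pthread_mutex_lock(&offset_lock);
	offset_slru[mxid] = off;
	pthread_mutex_unlock(&offset_lock);

	/*
	 * step 4: write the members to their SLRU page.  The window between
	 * steps 3 and 4 is where a reader could see a published offset but
	 * still-zero members.
	 */
	pthread_mutex_lock(&member_lock);
	for (i = 0; i < nxids; i++)
		member_slru[off + i] = xids[i];
	pthread_mutex_unlock(&member_lock);

	return mxid;
}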
> When we read MultiXact we do:
> 1. Retrieve offset by mxid from SLRU under MultiXactOffsetControlLock
> 2. If offset is 0 - it's not filled in at step 4 of previous algorithm, we sleep and goto 1
> 3. Retrieve members from SLRU under MultiXactMemberControlLock
> 4. ..... what we do if there are just zeroes because step 4 is not executed yet? Nothing, return empty members list.
So transactions never see such incomplete mxids, I believe.
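
And the read side of the same toy model, following the quoted steps 1-4
(again only an illustration: it reuses the arrays and locks declared in the
creation sketch above, pg_usleep() is replaced by usleep(), and the member
count is found by zero-termination, which is a simplification of what the
real code does):

/*
 * Read side of the same toy model.
 */
#include <unistd.h>

static int
get_multixact_members(uint32_t mxid, uint32_t *out, int maxmembers)
{
	uint32_t	off;
	int			n;

	/* steps 1-2: retrieve the offset; if it is still 0, sleep and retry */
	for (;;)
	{
		pthread_mutex_lock(&offset_lock);
		off = offset_slru[mxid];
		pthread_mutex_unlock(&offset_lock);
		if (off != 0)
			break;
		usleep(1000);			/* "we sleep and goto 1" */
	}

	/* step 3: retrieve the members */
	pthread_mutex_lock(&member_lock);
	for (n = 0; n < maxmembers && member_slru[off + n] != 0; n++)
		out[n] = member_slru[off + n];
	pthread_mutex_unlock(&member_lock);

	/*
	 * step 4: if the creator has not reached its step 4 yet, n is 0 here,
	 * i.e. an empty members list is returned.
	 */
	return n;
}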
> What am I missing?
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center