From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Noah Misch <noah(at)leadboat(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Alexey Klyukin <alexk(at)commandprompt(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Reducing overhead of frequent table locks
Date: 2011-05-27 20:55:07
Message-ID: BANLkTinD2=Ak2x7_U7eQL=8Ne=THAFJ44g@mail.gmail.com
Lists: pgsql-hackers
On Tue, May 24, 2011 at 10:03 AM, Noah Misch <noah(at)leadboat(dot)com> wrote:
> On Tue, May 24, 2011 at 08:53:11AM -0400, Robert Haas wrote:
>> On Tue, May 24, 2011 at 5:07 AM, Noah Misch <noah(at)leadboat(dot)com> wrote:
>> > This drops the part about only transferring fast-path entries once when a
>> > strong_lock_counts cell transitions from zero to one.
>>
>> Right: that's because I don't think that's what we want to do. I
>> don't think we want to transfer all per-backend locks to the shared
>> hash table as soon as anyone attempts to acquire a strong lock;
>> instead, I think we want to transfer only those fast-path locks which
>> have the same locktag as the strong lock someone is attempting to
>> acquire. If we do that, then it doesn't matter whether the
>> strong_lock_counts[] cell is transitioning from 0 to 1 or from 6 to 7:
>> we still have to check for strong locks with that particular locktag.
>
> Oh, I see. I was envisioning that you'd transfer all locks associated with
> the strong_lock_counts cell; that is, all the locks that would now go directly
> to the global lock table when requested going forward. Transferring only
> exact matches seems fine too, and then I agree with your other conclusions.
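
To pin down what "transferring only exact matches" means in practice, here's a
minimal standalone sketch (the structures and names below are purely
illustrative, not taken from any patch): when a backend is about to acquire a
strong lock, it bumps the counter for that locktag's partition and then moves
into the shared lock table just those fast-path entries whose locktag is
identical, leaving unrelated entries in the same partition alone.

/*
 * Toy model only; not PostgreSQL code.  All names here are placeholders.
 */
#include <stdbool.h>
#include <stdio.h>

#define NUM_BACKENDS   4
#define FASTPATH_SLOTS 16
#define NUM_PARTITIONS 64

typedef struct { unsigned dbid; unsigned relid; } LockTag;

typedef struct
{
    bool    used;
    LockTag tag;
} FastPathSlot;

static FastPathSlot fastpath[NUM_BACKENDS][FASTPATH_SLOTS]; /* per-backend arrays */
static int strong_lock_counts[NUM_PARTITIONS];              /* per-partition counters */

static unsigned
fasthash_partition(LockTag tag)
{
    return (tag.dbid ^ tag.relid) % NUM_PARTITIONS;
}

static bool
tags_equal(LockTag a, LockTag b)
{
    return a.dbid == b.dbid && a.relid == b.relid;
}

/* Stand-in for inserting the lock into the shared (global) lock table. */
static void
promote_to_shared_table(int backend, LockTag tag)
{
    printf("backend %d: transferred (%u,%u) to the shared lock table\n",
           backend, tag.dbid, tag.relid);
}

/* Called by a backend that wants a strong lock on 'tag'. */
static void
prepare_strong_lock(LockTag tag)
{
    strong_lock_counts[fasthash_partition(tag)]++;   /* see the atomicity question below */

    /* Transfer only fast-path entries with the identical locktag. */
    for (int b = 0; b < NUM_BACKENDS; b++)
        for (int s = 0; s < FASTPATH_SLOTS; s++)
            if (fastpath[b][s].used && tags_equal(fastpath[b][s].tag, tag))
            {
                promote_to_shared_table(b, fastpath[b][s].tag);
                fastpath[b][s].used = false;
            }
}

int
main(void)
{
    fastpath[0][0] = (FastPathSlot){ true, { 1, 100 } };  /* same locktag as the strong lock */
    fastpath[1][3] = (FastPathSlot){ true, { 1, 200 } };  /* different locktag: left alone */

    prepare_strong_lock((LockTag){ 1, 100 });             /* transfers only backend 0's entry */
    return 0;
}
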
I took a crack at implementing this and ran into difficulties. I
haven't actually gotten as far as testing whether it works, but I'm
worried about a possible problem with the algorithm.
When a strong lock is taken or released, we have to increment or
decrement strong_lock_counts[fasthashpartition]. Here's the question:
is that atomic? In other words, suppose that strong_lock_counts[42]
starts out at 0, and two backends both do ++strong_lock_counts[42].
Are we guaranteed to end up with "2" in that memory location or might
we unluckily end up with "1"? I think the latter is possible... and
some guard is needed to make sure that doesn't happen.
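
To make the hazard concrete, here's a standalone illustration (plain C with
pthreads and C11 atomics, purely for demonstration; none of this is server
code): with an unguarded ++, each increment is really a load, an add, and a
store, and two backends can interleave those steps so that one update is
lost. An atomic fetch-and-add, or holding a lock around the update, closes
that window.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define INCREMENTS_PER_THREAD 1000000

static int plain_count = 0;            /* unguarded: ++ is load, add, store */
static atomic_int atomic_count = 0;    /* each increment is indivisible */

static void *
worker(void *arg)
{
    (void) arg;
    for (int i = 0; i < INCREMENTS_PER_THREAD; i++)
    {
        plain_count++;                       /* two threads can interleave here and lose an update */
        atomic_fetch_add(&atomic_count, 1);  /* never loses an update */
    }
    return NULL;
}

int
main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    /* plain_count routinely comes out below 2000000; atomic_count never does */
    printf("plain:  %d\n", plain_count);
    printf("atomic: %d\n", atomic_count);
    return 0;
}

In the server itself, the increment would presumably need to happen under the
partition's spinlock or via some atomic primitive; either would serve as the
guard.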
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company