From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Noah Misch <noah(at)leadboat(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Alexey Klyukin <alexk(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Reducing overhead of frequent table locks
Date: 2011-05-25 15:15:53
Message-ID: BANLkTimQxLDPkavZBveOSEHcPRti8gGZQg@mail.gmail.com
Lists: pgsql-hackers
On Wed, May 25, 2011 at 8:56 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> On Wed, May 25, 2011 at 1:44 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> On Wed, May 25, 2011 at 8:27 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>>> I got a bit lost with the description of a potential solution. It
>>> seemed like you were unaware that there is a local lock and a shared
>>> lock table, maybe just me?
>>
>> No, I'm not unaware of the local lock table. The point of this
>> proposal is to avoid fighting over the LWLocks that protect the shared
>> hash table by allowing some locks to be taken without touching it.
>
> Yes, I got that. I just couldn't work out where mmap came in.
I don't think we were planning to use mmap anywhere.
>>> Design seemed relatively easy from there: put local lock table in
>>> shared memory for all procs. We then have a use_strong_lock at proc
>>> and at transaction level. Anybody that wants a strong lock first sets
>>> use_strong_lock at proc and transaction level, then copies all local
>>> lock data into shared lock table, double checking
>>> TransactionIdIsInProgress() each time. Then queues for lock using the
>>> now fully set up shared lock table. When transaction with strong lock
>>> completes we do not attempt to reset transaction level boolean, only
>>> at proc level, since DDL often occurs in groups and we want to avoid
>>> flip-flopping quickly between lock share states. Cleanup happens
>>> regularly via bgwriter, perhaps every 10 seconds or so. All locks are
>>> still visible for pg_locks.
>>
>> I'm not following this...
>
> Which bit aren't you following? It's a design outline for how to
> implement, deliberately brief to allow a discussion of design
> alternatives.
Well, OK:
1. I don't think putting the local lock table in shared memory is a
good idea, both for performance reasons (keeping it uncontended has
value) and for memory management reasons (it would increase shared
memory requirements quite a bit). The design Noah and I were
discussing upthread uses only a small, fixed amount of shared memory
for each backend, and preserves the local lock table as an unshared
resource.
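To illustrate what I mean by "small and fixed", here's a rough sketch
(names and sizes are made up for illustration, not actual PostgreSQL
source): each backend gets a handful of fast-path slots in shared
memory, sized independently of however many locks its private local
lock table happens to track.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative constants -- real values would be tuning decisions. */
#define FP_SLOTS_PER_BACKEND 16
#define MAX_BACKENDS 8

typedef struct FastPathSlot
{
    uint32_t relid;     /* relation OID; 0 means the slot is unused */
    int      lockmode;  /* weak lock mode held in this slot */
} FastPathSlot;

typedef struct BackendFastPath
{
    /* a real implementation would protect this with a per-backend lwlock */
    FastPathSlot slots[FP_SLOTS_PER_BACKEND];
} BackendFastPath;

/* Total shared memory is fixed at MAX_BACKENDS * sizeof(BackendFastPath),
 * regardless of how large any backend's private local lock table grows. */
static BackendFastPath fp_area[MAX_BACKENDS];

/* Record a weak lock in this backend's fast-path area.  Returns false
 * when the slots are full, in which case the caller would fall back to
 * the ordinary shared lock table. */
bool
fp_record_lock(int backend, uint32_t relid, int mode)
{
    for (int i = 0; i < FP_SLOTS_PER_BACKEND; i++)
    {
        if (fp_area[backend].slots[i].relid == 0)
        {
            fp_area[backend].slots[i].relid = relid;
            fp_area[backend].slots[i].lockmode = mode;
            return true;
        }
    }
    return false;
}
```

The point is that the shared footprint stays bounded and per-backend,
while the unbounded bookkeeping stays in backend-private memory.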
2. You haven't explained the procedure for acquiring a weak lock at
all, and in particular how a weak locker would be able to quickly
determine whether a conflicting strong lock was potentially present.
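In the scheme Noah and I were discussing, that determination is made
by having strong lockers advertise themselves in shared memory before
they scan anyone's fast-path slots. A minimal sketch, with illustrative
names (a real implementation would need atomics and memory barriers,
and would likely partition the counter to reduce contention):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Shared counter of in-progress or held strong-lock acquisitions.
 * Illustrative only: real code would use atomic operations. */
static volatile uint32_t strong_lock_count = 0;

/* Weak locker: returns true if the lock can be taken via the fast
 * path, i.e. without touching the shared lock table's LWLocks. */
bool
weak_lock_acquire(void)
{
    if (strong_lock_count == 0)
    {
        /* no strong lock can conflict: record the lock in this
         * backend's private fast-path slots and return */
        return true;
    }
    /* a conflicting strong lock may exist: fall back to the normal
     * shared lock table */
    return false;
}

/* Strong locker: advertise before touching anyone's fast-path state. */
void
strong_lock_prepare(void)
{
    strong_lock_count++;
    /* ...then migrate any matching fast-path entries into the shared
     * lock table and acquire the strong lock there as usual... */
}

/* Strong locker: un-advertise once the strong lock is released. */
void
strong_lock_done(void)
{
    strong_lock_count--;
}
```

The weak locker's check is a single read of shared memory, which is
why the common case avoids the lock-manager LWLock traffic entirely.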
3. I don't understand the proposed procedure for acquiring a strong
lock either; in particular, I don't see why
TransactionIdIsInProgress() would be relevant. The lock manager
doesn't really do anything with transaction IDs now, and you haven't
offered any explanation of why that would be necessary or advisable.
4. You haven't explained what the transaction-level boolean would
actually do. It's not even clear whether you intend for that to be
kept in local or shared memory. It's also unclear what you intend for
bgwriter to clean up.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company