From: Noah Misch <noah(at)leadboat(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Alexey Klyukin <alexk(at)commandprompt(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Reducing overhead of frequent table locks
Date: 2011-05-24 14:03:04
Message-ID: 20110524140304.GA21833@tornado.gateway.2wire.net
Lists: pgsql-hackers

On Tue, May 24, 2011 at 08:53:11AM -0400, Robert Haas wrote:
> On Tue, May 24, 2011 at 5:07 AM, Noah Misch <noah(at)leadboat(dot)com> wrote:
> > This drops the part about only transferring fast-path entries once when a
> > strong_lock_counts cell transitions from zero to one.
>
> Right: that's because I don't think that's what we want to do. I
> don't think we want to transfer all per-backend locks to the shared
> hash table as soon as anyone attempts to acquire a strong lock;
> instead, I think we want to transfer only those fast-path locks which
> have the same locktag as the strong lock someone is attempting to
> acquire. If we do that, then it doesn't matter whether the
> strong_lock_counts[] cell is transitioning from 0 to 1 or from 6 to 7:
> we still have to check for strong locks with that particular locktag.
Oh, I see. I was envisioning that you'd transfer all locks associated with
the strong_lock_counts cell; that is, all the locks that would, from then on,
go directly to the global lock table when requested. Transferring only exact
matches seems fine too, and in that case I agree with your other conclusions.
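To make sure we're picturing the same thing, here's roughly how I now see the
weak-lock acquisition path. This is only a sketch with invented names and
sizes (everything except strong_lock_counts is made up), not code from the
patch:

#include <stdbool.h>
#include <stdint.h>

#define FP_PARTITIONS        1024  /* cells in strong_lock_counts[]; illustrative */
#define FP_SLOTS_PER_BACKEND 16    /* per-backend fast-path slots; illustrative */

typedef struct LockTag { uint32_t dbid; uint32_t relid; } LockTag;
typedef struct FastPathSlot { bool in_use; LockTag tag; int mode; } FastPathSlot;

extern volatile int strong_lock_counts[FP_PARTITIONS];  /* in shared memory */
extern FastPathSlot my_fast_path[FP_SLOTS_PER_BACKEND]; /* this backend's queue */

extern void acquire_my_backend_lwlock(void);
extern void release_my_backend_lwlock(void);
extern bool acquire_via_main_lock_table(const LockTag *tag, int mode);

static uint32_t
fp_partition(const LockTag *tag)
{
    return (tag->dbid ^ tag->relid) % FP_PARTITIONS;    /* stand-in hash */
}

/* Weak-lock acquisition: use the fast path only while no strong locker has
 * bumped the counter covering this locktag. */
bool
weak_lock_acquire(const LockTag *tag, int mode)
{
    bool done = false;

    acquire_my_backend_lwlock();
    if (strong_lock_counts[fp_partition(tag)] == 0)
    {
        for (int i = 0; i < FP_SLOTS_PER_BACKEND; i++)
        {
            if (!my_fast_path[i].in_use)
            {
                my_fast_path[i].in_use = true;
                my_fast_path[i].tag = *tag;
                my_fast_path[i].mode = mode;
                done = true;
                break;
            }
        }
    }
    release_my_backend_lwlock();

    /* Counter nonzero, or no free slot: fall back to the shared lock table. */
    return done || acquire_via_main_lock_table(tag, mode);
}

The point I'm relying on in this sketch is that the counter check and the slot
insertion both happen under the per-backend LWLock, so a transferrer that
later takes that LWLock is guaranteed to see any entry made before it did.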
> > Granted, that itself
> > requires some yet-undiscussed locking. For that matter, we can't have
> > multiple strong lockers completing transfers on the same cell in parallel.
> > Perhaps add a FastPathTransferLock, or an array of per-cell locks, that each
> > strong locker holds for that entire "if" body and while decrementing the
> > strong_lock_counts cell at lock release.
>
> I was imagining that the per-backend LWLock would protect the list of
> fast-path locks. So to transfer locks, you would acquire the
> per-backend LWLock for the backend which has the lock, and then the
> lock manager partition LWLock, and then perform the transfer.
I see later in your description that the transferer will delete each fast-path
lock after transferring it. Given that, this does sound adequate.
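Concretely, then, I picture the strong-lock side doing something like this
before it queues for the lock itself; again, just a sketch with made-up names
(and glossing over how the counter update itself is synchronized):

#include <stdbool.h>
#include <stdint.h>

#define FP_PARTITIONS        1024
#define FP_SLOTS_PER_BACKEND 16
#define MAX_BACKENDS         100   /* illustrative */

typedef struct LockTag { uint32_t dbid; uint32_t relid; } LockTag;
typedef struct FastPathSlot { bool in_use; LockTag tag; int mode; } FastPathSlot;

extern volatile int strong_lock_counts[FP_PARTITIONS];

extern uint32_t fp_partition(const LockTag *tag);   /* strong_lock_counts cell */
extern uint32_t lock_partition(const LockTag *tag); /* lock manager partition */
extern bool locktag_equal(const LockTag *a, const LockTag *b);
extern FastPathSlot *backend_fast_path(int backend);
extern void acquire_backend_lwlock(int backend);
extern void release_backend_lwlock(int backend);
extern void acquire_partition_lwlock(uint32_t partition);
extern void release_partition_lwlock(uint32_t partition);
extern void insert_into_main_lock_table(int owner, const LockTag *tag, int mode);

/* Run before acquiring a lock stronger than ShareUpdateExclusiveLock. */
void
prepare_strong_lock(const LockTag *tag)
{
    uint32_t cell = fp_partition(tag);
    uint32_t part = lock_partition(tag);

    /* Announce the strong lock first, so new fast-path attempts on this
     * cell start going straight to the main lock table. */
    strong_lock_counts[cell]++;

    for (int b = 0; b < MAX_BACKENDS; b++)
    {
        /* Per-backend LWLock first, then the lock manager partition lock. */
        acquire_backend_lwlock(b);
        acquire_partition_lwlock(part);

        FastPathSlot *queue = backend_fast_path(b);

        for (int i = 0; i < FP_SLOTS_PER_BACKEND; i++)
        {
            /* Transfer only entries whose locktag matches the strong lock. */
            if (queue[i].in_use && locktag_equal(&queue[i].tag, tag))
            {
                insert_into_main_lock_table(b, &queue[i].tag, queue[i].mode);
                queue[i].in_use = false;   /* transferrer deletes the entry */
            }
        }

        release_partition_lwlock(part);
        release_backend_lwlock(b);
    }

    /* The strong lock itself then goes through the main lock table as usual,
     * and strong_lock_counts[cell] gets decremented at lock release. */
}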
> > As far as the level of detail of this pseudocode goes, there's no need to hold
> > the per-backend LWLock while transferring the fast-path entries. You just
> > need to hold it sometime between bumping strong_lock_counts and transferring
> > the backend's locks. This ensures that, for example, the backend is not
> > sleeping in the middle of a fast-path lock acquisition for the whole duration
> > of this code.
>
> See above; I'm lost.
It wasn't a particularly useful point.
> > To validate the locking at this level of detail, I think we need to sketch the
> > unlock protocol, too. On each strong lock release, we'll decrement the
> > strong_lock_counts cell. No particular interlock with fast-path lockers
> > should be needed; a stray AccessShareLock needlessly making it into the global
> > lock table is no problem. As mentioned above, we _will_ need an interlock
> > with lock transfer operations. How will transferred fast-path locks get
> > removed from the global lock table? Presumably, the original fast-path locker
> > should do so at transaction end; anything else would contort the life cycle.
> > Then add a way for the backend to know which locks had been transferred as
> > well as an interlock against concurrent transfer operations. Maybe that's
> > all.
>
> I'm thinking that the backend can note, in its local-lock table,
> whether it originally acquired a lock via the fast-path or not. Any
> lock not originally acquired via the fast-path will be released just
> as now. For any lock that WAS originally acquired via the fast-path,
> we'll take our own per-backend lwlock, which protects the fast-path
> queue, and scan the fast-path queue for a matching entry. If none is
> found, then we know the lock was transferred, so release the
> per-backend lwlock and do it the regular way (take lock manager
> partition lock, etc.).
Sounds good.
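That release path would then amount to roughly this (same caveats as the
earlier sketches: invented names, details elided):

#include <stdbool.h>
#include <stdint.h>

#define FP_SLOTS_PER_BACKEND 16

typedef struct LockTag { uint32_t dbid; uint32_t relid; } LockTag;
typedef struct FastPathSlot { bool in_use; LockTag tag; int mode; } FastPathSlot;

/* Backend-private bookkeeping for a held lock. */
typedef struct LocalLock
{
    LockTag tag;
    int     mode;
    bool    acquired_via_fast_path;
} LocalLock;

extern FastPathSlot my_fast_path[FP_SLOTS_PER_BACKEND];

extern bool locktag_equal(const LockTag *a, const LockTag *b);
extern void acquire_my_backend_lwlock(void);
extern void release_my_backend_lwlock(void);
extern void release_via_main_lock_table(const LockTag *tag, int mode);

void
weak_lock_release(LocalLock *locallock)
{
    if (locallock->acquired_via_fast_path)
    {
        bool found = false;

        /* Scan our fast-path queue under our per-backend LWLock. */
        acquire_my_backend_lwlock();
        for (int i = 0; i < FP_SLOTS_PER_BACKEND; i++)
        {
            if (my_fast_path[i].in_use &&
                locktag_equal(&my_fast_path[i].tag, &locallock->tag))
            {
                my_fast_path[i].in_use = false;  /* never transferred; done */
                found = true;
                break;
            }
        }
        release_my_backend_lwlock();

        if (found)
            return;
        /* No entry: a strong locker transferred it, so fall through. */
    }

    /* Release through the main lock table (partition lock, etc.). */
    release_via_main_lock_table(&locallock->tag, locallock->mode);
}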
> > To put it another way: the current system is fair; the chance of hitting lock
> > exhaustion is independent of lock level. The new system would be unfair; lock
> > exhaustion is much more likely to appear for a > ShareUpdateExclusiveLock
> > acquisition, through no fault of that transaction. I agree this isn't ideal,
> > but it doesn't look to me like an unacceptable weakness. Making lock slots
> > first-come, first-served is inherently unfair; we're not at all set up to
> > justly arbitrate between mutually-hostile lockers competing for slots. The
> > overall situation will get better, not worse, for the admin who wishes to
> > defend against hostile unprivileged users attempting a lock table DOS.
>
> Well, it's certainly true that the proposed system is far less likely
> to bomb out trying to acquire an AccessShareLock than what we have
> today, since in the common case the AccessShareLock doesn't use up any
> shared resources. And that should make a lot of people happy. But as
> to the bad scenario, one needn't presume that the lockers are hostile
> - it may just be that the system is running on the edge of a full lock
> table. In the worst case, someone wanting a strong lock on a table
> may end up transferring a hundred or more locks (up to one per
> backend) into that table. Even if they all fit and the strong locker
> gets his lock, it may now happen that the space is just about
> exhausted and other transactions start rolling back, apparently at
> random.
>
> Or, even more pathologically, one backend grabs a strong lock on table
> X, but it just so happens that there is another table Y in the same
> lock partition which is highly trafficked but, as all of the locks
> involved are weak, uncontended. Now that strong_lock_counts[] is
> non-zero for that partition, all those locks start going into the main
> lock manager table. Performance will drop, which isn't great but we
> can live with it. But maybe all the locks don't fit. Now we have a
> situation in which, due to one backend acquiring a strong lock on
> table X, a bunch of other backends are unable to obtain weak locks on
> table Y, and transactions start rolling back all over the place.
>
> Now maybe you can argue that this scenario is sufficiently unlikely in
> practice that we shouldn't worry about it, but if it does happen the
> DBA will be incredibly confused, because an apparently innocuous
> operation on one table will have resulted in rollbacks of transactions acquiring locks
> on an apparently unrelated table. Maybe you want to argue that's
> sufficiently rare that we shouldn't worry about it, but the
> unpleasantness factor seems pretty high to me.
Let me see if I understand the risk now: the new system will handle lock
load better, but when it does hit a limit, understanding why that happened
will be more difficult. Good point. No silver-bullet ideas come to mind for
avoiding that. A lock transfer operation could abort if it sees itself
burning through all the shared memory, but I'd be pessimistic about
identifying a concrete heuristic not subject to its own confusing corner
cases. Overall, the outcome you describe doesn't sound especially confusing
to me. YMMV.
Will the pg_locks view scan fast-path lock tables? If not, we probably need
another view that does. We can then encourage administrators to monitor for
fast-path lock counts that get high relative to shared memory capacity.
Thanks,
nm