From: Noah Misch <noah(at)leadboat(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Alexey Klyukin <alexk(at)commandprompt(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Reducing overhead of frequent table locks
Date: 2011-05-24 15:38:52
Message-ID: 20110524153852.GC21833@tornado.gateway.2wire.net
Lists: pgsql-hackers
On Tue, May 24, 2011 at 10:35:23AM -0400, Robert Haas wrote:
> On Tue, May 24, 2011 at 10:03 AM, Noah Misch <noah(at)leadboat(dot)com> wrote:
> > Let's see if I understand the risk better now: the new system will handle lock
> > load better, but when it does hit a limit, understanding why that happened
> > will be more difficult.  Good point.  No silver-bullet ideas come to mind for
> > avoiding that.
>
> The only idea I can think of is to try to impose some bounds. For
> example, suppose we track the total number of locks that the system
> can handle in the shared hash table. We try to maintain the system in
> a state where the number of locks that actually exist is less than
> that number, even though some of them may be stored elsewhere. You
> can imagine a system where backends grab a global mutex to reserve a
> certain number of slots, and store that in shared memory together with
> their fast-path list, but another backend which is desperate for space
> can go through and trim back reservations to actual usage.
Forcing artificial resource exhaustion is a high price to pay. I suppose it's
quite like disabling Linux memory overcommit, and some folks would like it.
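
For reference, here is my reading of the reservation scheme as a rough C sketch. The names and the pthread mutex are purely illustrative stand-ins for whatever shared-memory structures and locking we would actually use; this is not proposed code.

    #include <pthread.h>
    #include <stdbool.h>

    #define MAX_BACKENDS 64

    /* Hypothetical shared-memory bookkeeping for the reservation idea. */
    typedef struct
    {
        pthread_mutex_t mutex;                  /* the "global mutex" in the proposal */
        int             slots_total;            /* capacity of the shared lock hash table */
        int             slots_reserved;         /* sum of all per-backend reservations */
        int             reserved[MAX_BACKENDS]; /* slots each backend has claimed */
        int             used[MAX_BACKENDS];     /* fast-path locks actually held */
    } LockSlotReservations;

    /* Reserve n more slots for one backend; trim others back if short on space. */
    bool
    reserve_lock_slots(LockSlotReservations *r, int backend_id, int n)
    {
        bool    ok = false;

        pthread_mutex_lock(&r->mutex);

        if (r->slots_reserved + n > r->slots_total)
        {
            /* Desperate for space: shrink other backends' reservations to usage. */
            for (int i = 0; i < MAX_BACKENDS; i++)
            {
                if (i == backend_id)
                    continue;
                r->slots_reserved -= r->reserved[i] - r->used[i];
                r->reserved[i] = r->used[i];
            }
        }

        if (r->slots_reserved + n <= r->slots_total)
        {
            r->reserved[backend_id] += n;
            r->slots_reserved += n;
            ok = true;
        }

        pthread_mutex_unlock(&r->mutex);
        return ok;
    }
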
> Another random idea for optimization: we could have a lock-free array
> with one entry per backend, indicating whether any fast-path locks are
> present. Before acquiring its first fast-path lock, a backend writes
> a 1 into that array and inserts a store fence. After releasing its
> last fast-path lock, it performs a store fence and writes a 0 into the
> array. Anyone who needs to grovel through all the per-backend
> fast-path arrays for whatever reason can perform a load fence and then
> scan the array. If I understand how this stuff works (and it's very
> possible that I don't), when the scanning backend sees a 0, it can be
> assured that the target backend has no fast-path locks and therefore
> doesn't need to acquire and release that LWLock or scan that fast-path
> array for entries.
I'm probably just missing something, but can't that conclusion become obsolete
arbitrarily quickly? What if the scanning backend sees a 0, and the subject
backend is currently sleeping just before it would have bumped that value? We
need to take the LWLock if there's any chance that the subject backend has not
yet seen the scanning backend's strong_lock_counts[] update.
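
To spell out the interleaving I'm worried about, a minimal sketch (hypothetical names, C11 atomics standing in for the proposed fences; not actual backend code):

    #include <stdatomic.h>

    /* Hypothetical globals standing in for the shared structures under discussion. */
    static atomic_int strong_lock_counts[16];   /* per-partition strong-lock counts */
    static atomic_int fastpath_active[64];      /* proposed per-backend flags */

    /* Subject backend: acquiring its first fast-path (weak) lock. */
    void
    take_weak_lock_fastpath(int my_backend_id, int partition)
    {
        if (atomic_load(&strong_lock_counts[partition]) == 0)
        {
            /*
             * RACE WINDOW: if this backend sleeps here, a scanning backend
             * that has already bumped strong_lock_counts[] reads
             * fastpath_active[my_backend_id] == 0 and skips us, yet we go on
             * to record a fast-path lock it never sees.
             */
            atomic_store(&fastpath_active[my_backend_id], 1);
            atomic_thread_fence(memory_order_release);  /* the proposed store fence */
            /* ... insert the lock into our per-backend fast-path array ... */
        }
        else
        {
            /* ... fall back to the shared lock table ... */
        }
    }

    /* Scanning backend: acquiring a strong lock on the same partition. */
    void
    take_strong_lock(int partition, int n_backends)
    {
        atomic_fetch_add(&strong_lock_counts[partition], 1);
        atomic_thread_fence(memory_order_acquire);      /* the proposed load fence */

        for (int i = 0; i < n_backends; i++)
        {
            if (atomic_load(&fastpath_active[i]) == 0)
                continue;   /* unsafe skip if backend i sits in the window above */
            /* ... take backend i's LWLock and transfer its fast-path entries ... */
        }
    }
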
nm