From: | "Jignesh K(dot) Shah" <J(dot)K(dot)Shah(at)Sun(dot)COM> |
---|---|
To: | Scott Carey <scott(at)richrelevance(dot)com> |
Cc: | Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: Proposal of tunable fix for scalability of 8.4 |
Date: | 2009-03-12 14:57:04 |
Message-ID: | 49B922C0.6050700@sun.com |
Lists: pgsql-performance
On 03/11/09 22:01, Scott Carey wrote:
> On 3/11/09 3:27 PM, "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
>
>
>> I'm a lot more interested in what's happening between 60 and 180 than
>> over 1000, personally. If there was a RAID involved, I'd put it down
>> to better use of the numerous spindles, but when it's all in RAM it
>> makes no sense.
>
> If there is enough lock contention and a common lock case is a short
> lived shared lock, it makes perfect sense. Fewer readers are
> blocked waiting on writers at any given time. Readers can 'cut' in
> line ahead of writers within a certain scope (only up to the number
> waiting at the time a shared lock is at the head of the queue).
> Essentially this clumps up shared and exclusive locks into larger
> streaks, and allows for higher shared lock throughput.
> Exclusive locks may be delayed, but will NOT be starved, since on the
> next iteration, a streak of exclusive locks will occur first in the
> list and they will all process before any more shared locks can go.
>
> This will even help on a single-CPU system if it is read dominated,
> lowering read latency and slightly increasing write latency.
>
> If you want to make this more fair, instead of freeing all shared
> locks, limit the count to some number, such as the number of CPU
> cores. Perhaps rather than wake-up-all-waiters=true, the parameter
> can be an integer representing how many shared locks can be freed at
> once if an exclusive lock is encountered.
>
>
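To make the behaviour Scott describes above concrete, here is a rough, self-contained sketch (illustration only, not the actual code from my patch): every shared waiter currently queued is released in one streak, while exclusive waiters stay queued, in order, at the head of the queue and run first on the next pass, so they are delayed but never starved. The Waiter struct and wake_all_shared() are names made up for the example.

/*
 * Sketch only -- not the actual lwlock.c change.  All shared waiters in
 * the queue are released in one streak; exclusive waiters are kept, in
 * order, at the head of the queue and run first on the next pass.
 */
#include <stdio.h>

typedef enum { LW_SHARED, LW_EXCLUSIVE } LWLockMode;

typedef struct
{
    int        pid;        /* hypothetical backend id, for the printout */
    LWLockMode mode;
} Waiter;

/* Wake every shared waiter, compact the exclusive ones to the front.
 * Returns the new queue length (the number of exclusive waiters left). */
static int
wake_all_shared(Waiter *queue, int nwaiters)
{
    int keep = 0;

    for (int i = 0; i < nwaiters; i++)
    {
        if (queue[i].mode == LW_SHARED)
            printf("waking shared waiter %d\n", queue[i].pid);
        else
            queue[keep++] = queue[i];   /* stays queued for the next pass */
    }
    return keep;
}

int
main(void)
{
    Waiter queue[] = {
        {101, LW_SHARED}, {102, LW_EXCLUSIVE},
        {103, LW_SHARED}, {104, LW_SHARED},
    };
    int remaining = wake_all_shared(queue, 4);

    printf("%d exclusive waiter(s) now at the head of the queue\n", remaining);
    return 0;
}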
Well, I am waking up not just shared waiters but both shared and exclusive
ones. However, I like your idea of waking up the next N waiters, where N
matches the number of CPUs available. In my case that is 64, so this works
well: with 64 waiters awake, one of them will be able to take the lock
immediately, so no cycles are wasted with nobody holding the lock. That
waste is common when you wake up only one waiter and hope that its process
is on a CPU (in my case there are 64 processes) and able to acquire the
lock. The probability of acquiring the lock within the next few cycles is
much lower for a single waiter than when 64 such processes get the chance
and then fight it out based on who is already on a CPU. That way the period
where nobody holds the lock is reduced, which helps cut out the "artifact"
idle time on the system.
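A minimal sketch of what that "wake only the next N" variation could look like, with N sized to the CPU count (64 here). Again this is illustration only; NUM_CPUS, wake_up_limit and wake_next_n are made-up names, not what the patch uses.

/*
 * Sketch of the "wake only the next N waiters" variation, with N sized to
 * the CPU count (64 here).  Illustration only; wake_up_limit and
 * wake_next_n are made-up names, not what the patch uses.
 */
#include <stdio.h>

#define NUM_CPUS 64                     /* assumption: the 64-way box above */

static int wake_up_limit = NUM_CPUS;    /* hypothetical integer tunable */

/* Wake at most wake_up_limit waiters from the head of the queue, shared
 * and exclusive alike, and let the scheduler decide which one actually
 * grabs the lock next.  Returns the number woken. */
static int
wake_next_n(const int *waiter_pids, int nwaiters)
{
    int n = (nwaiters < wake_up_limit) ? nwaiters : wake_up_limit;

    for (int i = 0; i < n; i++)
        printf("waking waiter %d\n", waiter_pids[i]);
    return n;
}

int
main(void)
{
    int pids[100];

    for (int i = 0; i < 100; i++)
        pids[i] = 1000 + i;

    printf("woke %d of 100 waiters\n", wake_next_n(pids, 100));
    return 0;
}

Capping the wake-up at roughly one waiter per CPU keeps the run queue from being flooded while still making it very likely that one of the awakened processes is already on a CPU when the lock frees up.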
As soon as I get more "cycles" I will try variations of it, but it would
help if others could try it out in their own environments to see whether it
helps their workloads.
-Jignesh