From: Andres Freund <andres(at)anarazel(dot)de>
To: Alexander Korotkov <a(dot)korotkov(at)postgrespro(dot)ru>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, YUriy Zhuravlev <u(dot)zhuravlev(at)postgrespro(dot)ru>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Move PinBuffer and UnpinBuffer to atomics
Date: 2015-12-11 13:04:13
Message-ID: 20151211130413.GO14789@awork2.anarazel.de
Lists: pgsql-hackers
On 2015-12-11 15:56:46 +0300, Alexander Korotkov wrote:
> On Thu, Dec 10, 2015 at 9:26 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
> wrote:
> >>>> We did see this on a big Intel machine in practice. pgbench -S takes
> >>>> the shared ProcArrayLock very frequently. Once a certain number of
> >>>> connections is reached, new connections hang trying to get the
> >>>> exclusive ProcArrayLock. I think we could work around this problem.
> >>>> For instance, when an exclusive lock waiter times out, it could set
> >>>> a special bit that prevents others from taking new shared locks.
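To make the idea concrete, here is a minimal sketch of such a "special bit" using C11 atomics. This is not PostgreSQL's actual LWLock code; the demo_lwlock type, the bit layout, and all names prefixed with demo_ are invented for illustration.

    /*
     * Minimal sketch of the proposed anti-starvation bit. NOT the real
     * LWLock implementation; everything here is invented for the example.
     */
    #include <stdatomic.h>
    #include <stdbool.h>

    #define SHARED_MASK        0x3FFFFFFFu  /* count of shared holders */
    #define EXCLUSIVE_HELD     0x40000000u  /* an exclusive holder exists */
    #define EXCLUSIVE_PENDING  0x80000000u  /* proposed anti-starvation bit */

    typedef struct { _Atomic unsigned state; } demo_lwlock;

    /* New shared lockers back off while an exclusive waiter is starving. */
    static bool
    demo_shared_try(demo_lwlock *lock)
    {
        unsigned old = atomic_load(&lock->state);

        if (old & (EXCLUSIVE_HELD | EXCLUSIVE_PENDING))
            return false;
        /* Increment the shared-holder count in the low bits. */
        return atomic_compare_exchange_weak(&lock->state, &old, old + 1);
    }

    /* Called by an exclusive waiter whose timeout has expired. */
    static void
    demo_exclusive_mark_starving(demo_lwlock *lock)
    {
        atomic_fetch_or(&lock->state, EXCLUSIVE_PENDING);
    }

The bit would have to be cleared again once the exclusive waiter finally acquires the lock, as in the retry-counting sketch further down.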
> > Yes, that's right, but I think in general the solution to this problem
> > should be to keep any exclusive locker from starving while still
> > allowing as many shared lockers as possible. The important question is
> > how we define starving: should it be based on time or something else?
> > I find a timer-based solution somewhat less suitable, but maybe it is
> > okay if there is no better way.
> >
>
> Yes, we probably should find something better.
>
> >>> Another way could be to check whether the exclusive locker has had
> >>> to go through repeated waits a couple of times; then we can set such
> >>> a bit.
> >>>
> >>
> >> I'm not sure what you mean by repeated wait. Do you mean the
> >> exclusive locker was woken up twice by timeout?
> >>
> >
> > I mean to say that once the exclusive locker is woken up, it retries
> > acquiring the lock as it does today, but if it finds that the number
> > of retries is greater than a certain threshold (let us say 10), then
> > we set the bit.
> >
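A retry-counting variant of the same sketch might look like this, reusing the hypothetical demo_lwlock type and bit definitions from above; demo_wait_until_woken is a crude stand-in for the real sleep/wakeup machinery.

    #include <sched.h>

    #define MAX_RETRIES 10

    /* Stand-in for queueing on the lock and sleeping; a real
     * implementation would block until woken by a releaser. */
    static void
    demo_wait_until_woken(demo_lwlock *lock)
    {
        (void) lock;
        sched_yield();          /* crude placeholder: just yield */
    }

    static void
    demo_exclusive_acquire(demo_lwlock *lock)
    {
        int retries = 0;

        for (;;)
        {
            unsigned old = atomic_load(&lock->state);

            /* Free means: no shared holders, no exclusive holder. */
            if ((old & (SHARED_MASK | EXCLUSIVE_HELD)) == 0 &&
                atomic_compare_exchange_weak(&lock->state, &old,
                            (old | EXCLUSIVE_HELD) & ~EXCLUSIVE_PENDING))
                return;         /* acquired; anti-starvation bit cleared */

            /* Failed attempt: after enough of them, set the bit so that
             * new shared lockers stop cutting in line. */
            if (++retries > MAX_RETRIES)
                atomic_fetch_or(&lock->state, EXCLUSIVE_PENDING);

            demo_wait_until_woken(lock);
        }
    }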
>
> Yes, there is a retry loop in the LWLockAcquire function. A retry
> happens when a waiter is woken up but someone else steals the lock
> first. A lock waiter is woken up by the lock releaser only when the
> lock becomes free. But under high concurrency on the shared lock, it
> almost never becomes free, so the exclusive locker would never be woken
> up. I'm pretty sure this is what happens on the big Intel machine while
> we run the benchmark, so relying on the number of retries wouldn't work
> in this case. I'll run tests to verify whether retries happen in our
> case.
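To illustrate the failure mode Alexander describes, consider a simplified release path in the same hypothetical model: waiters are woken only when the lock becomes completely free, which never happens while shared lockers keep arriving, so the exclusive waiter's retry counter never advances.

    /* Hypothetical wakeup helper: in a real lock this would pop the
     * wait queue and wake the sleeping processes. Placeholder only. */
    static void
    demo_wake_waiters(demo_lwlock *lock)
    {
        (void) lock;
    }

    static void
    demo_shared_release(demo_lwlock *lock)
    {
        unsigned newval = atomic_fetch_sub(&lock->state, 1) - 1;

        /*
         * Only the releaser that drops the lock to "fully free" wakes
         * the wait queue. With shared lockers arriving continuously,
         * the shared count never reaches zero, so a queued exclusive
         * waiter is never woken and never gets a chance to retry.
         */
        if ((newval & (SHARED_MASK | EXCLUSIVE_HELD)) == 0)
            demo_wake_waiters(lock);
    }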
I seriously doubt that making lwlocks fairer is the right way to go
here. In my testing the "unfairness" is essential to performance: the
number of context switches otherwise increases massively.

I think in this case it's better to work on making the lock less
contended, rather than micro-optimizing the locking behaviour.
Andres