From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Wait free LW_SHARED acquisition
Date: 2013-09-27 08:42:48
Message-ID: 20130927084248.GD5588@awork2.anarazel.de
Lists: pgsql-hackers
On 2013-09-27 11:11:56 +0300, Heikki Linnakangas wrote:
> On 27.09.2013 10:21, Andres Freund wrote:
> >Heaps better. In the case causing this investigation lots of the pages
> >with hot spinlocks were simply the same ones over and over again;
> >partitioning the lockspace won't help much there.
> >That's not exactly an uncommon scenario since often enough there's a
> >small amount of data hit very frequently and lots more that is accessed
> >only infrequently. E.g. recently inserted data and such tends to be very hot.
>
> I see. So if only a few buffers are really hot, I'm assuming the problem
> isn't just the buffer partition lock, but also the spinlock on the buffer
> header, and the buffer content lwlock. Yeah, improving LWLocks would be a
> nice wholesale solution to that. I don't see any fundamental flaw in your
> algorithm. Nevertheless, I'm going to throw in a couple of other ideas:
>
> * Keep a small 4-5 entry cache of buffer lookups in each backend of most
> recently accessed buffers. Before searching for a buffer in the
> SharedBufHash, check the local cache.
I thought about that as well, but you'd either need to revalidate after
pinning the buffers, or keep them pinned.
I had a very hacky implementation of that, but it just made the buffer
content locks the top profile spots. Similar issue there.
It might be worthwhile to do this nonetheless - a lock xadd certainly
isn't cheap, even though it's cheaper than a full spinlock - but it's not
trivial.
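
To make the revalidation issue concrete, here's a minimal sketch of
such a per-backend cache. This is not actual PostgreSQL code: BufferTag
is simplified, and PinBuffer()/BufferTagMatches() are hypothetical
stand-ins for the real buffer manager routines. The point is that a
cached entry may be stale, so we have to pin first and then re-check
the tag:

/*
 * Sketch only: tiny per-backend cache of recent buffer lookups,
 * consulted before SharedBufHash.
 */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef struct BufferTag { uint32_t rnode, forknum, blocknum; } BufferTag;

#define LOCAL_BUF_CACHE_SIZE 4

typedef struct LocalBufCacheEntry
{
    BufferTag tag;      /* which page we think this buffer holds */
    int       buf_id;   /* index into the shared buffer array */
    bool      valid;
} LocalBufCacheEntry;

static LocalBufCacheEntry local_buf_cache[LOCAL_BUF_CACHE_SIZE];

/* hypothetical helpers, assumed provided by the buffer manager */
extern bool PinBuffer(int buf_id);
extern bool BufferTagMatches(int buf_id, const BufferTag *tag);

static int
LocalBufCacheLookup(const BufferTag *tag)
{
    for (int i = 0; i < LOCAL_BUF_CACHE_SIZE; i++)
    {
        LocalBufCacheEntry *e = &local_buf_cache[i];

        if (e->valid && memcmp(&e->tag, tag, sizeof(BufferTag)) == 0)
        {
            if (!PinBuffer(e->buf_id))
                break;

            /* Revalidate: the buffer may hold a different page by now. */
            if (BufferTagMatches(e->buf_id, tag))
                return e->buf_id;   /* hit, and still the same page */

            /* Stale entry: unpin (omitted here) and invalidate. */
            e->valid = false;
            break;
        }
    }
    return -1;      /* miss: caller falls back to SharedBufHash */
}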
> * To pin a buffer, use an atomic fetch-and-add instruction to increase the
> refcount. PinBuffer() also increases usage_count, but you could do that
> without holding a lock; it doesn't need to be accurate.
Yes, Pin/UnpinBuffer() are the primary contention points after this
patch. I want to tackle them, but that seems like a separate thing to
do.
I think we should be able to get rid of most or even all LockBufHdr()
calls by
a) introducing PinBufferInternal(), which increases the pin count but
not the usage count, using an atomic increment. That can replace header
locking in many cases.
b) making PinBuffer() increment the pin and usage counts using a single
64bit atomic add if available, and fitting the flags in there as well.
Then back off the usage count if it's too high or even wraps around;
that doesn't hurt much, since we're pinned at that moment (sketch
below).
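
A rough sketch of idea (b), assuming a packed layout. This is not the
actual BufferDesc; the field widths, masks, and names are illustrative,
and the usage-count field is made wide enough that transient overshoot
by concurrent pinners can't carry into the flag bits:

#include <stdatomic.h>
#include <stdint.h>

#define BUF_REFCOUNT_ONE    UINT64_C(1)             /* bits 0..17 */
#define BUF_USAGECOUNT_ONE  (UINT64_C(1) << 18)     /* bits 18..27 */
#define BUF_USAGECOUNT_MASK (UINT64_C(0x3FF) << 18)
#define BM_MAX_USAGE_COUNT  5
/* flag bits such as BM_DIRTY would live at bit 28 and above */

typedef struct BufferDescSketch
{
    _Atomic uint64_t state;     /* refcount | usagecount | flags */
} BufferDescSketch;

static void
PinBufferSketch(BufferDescSketch *buf)
{
    /* One xadd bumps refcount and usagecount together; no header lock. */
    uint64_t old = atomic_fetch_add(&buf->state,
                                    BUF_REFCOUNT_ONE + BUF_USAGECOUNT_ONE);
    uint64_t usage = (old & BUF_USAGECOUNT_MASK) >> 18;

    /*
     * If usagecount was already at the cap, back it off again.  The
     * transient overshoot is harmless: we hold a pin at this point, so
     * the buffer can't be evicted from under us while we correct it.
     */
    if (usage >= BM_MAX_USAGE_COUNT)
        atomic_fetch_sub(&buf->state, BUF_USAGECOUNT_ONE);
}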
> One problem with your patch is going to be to make it also work without the
> CAS and fetch-and-add instructions. Those are probably present in all the
> architectures we support, but it'll take some effort to get the
> architecture-specific code done. Until it's all done, it would be good to be
> able to fall back to plain spinlocks, which we already have. Also, when
> someone ports PostgreSQL to a new architecture in the future, it would be
> helpful if you wouldn't need to write all the architecture-specific code
> immediately to get it to compile.
I think most recent compilers have intrinsics for operations like
that. I quite like the API gcc provides for this kind of thing; I think
we should model an internal wrapper API on it. I don't see any new
platform coming that has a compiler without intrinsics.
But yes, you're right, I think we need a spinlock-based fallback for
now. Even if it's just because none of us can verify the
implementations on the more obsolete platforms we claim to support. I
just didn't see it as a priority in the PoC.
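
Something along these lines is what I have in mind, modelled on gcc's
__atomic builtins. The pg_atomic_* names and the PG_HAVE_ATOMICS macro
are made up for illustration; slock_t and S_LOCK/S_UNLOCK are the
existing spinlock primitives from s_lock.h:

#include <stdint.h>

typedef struct pg_atomic_uint32
{
    volatile uint32_t value;
#ifndef PG_HAVE_ATOMICS
    slock_t     mutex;          /* protects value on fallback platforms */
#endif
} pg_atomic_uint32;

static inline uint32_t
pg_atomic_fetch_add_u32(pg_atomic_uint32 *ptr, uint32_t add)
{
#ifdef PG_HAVE_ATOMICS
    /* gcc/clang builtin; compiles to lock xadd on x86 */
    return __atomic_fetch_add(&ptr->value, add, __ATOMIC_SEQ_CST);
#else
    /* fallback: emulate the atomic op under a spinlock */
    uint32_t    old;

    S_LOCK(&ptr->mutex);
    old = ptr->value;
    ptr->value = old + add;
    S_UNLOCK(&ptr->mutex);
    return old;
#endif
}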
> Did you benchmark your patch against the compare-and-swap patch I posted
> earlier? (http://www.postgresql.org/message-id/519A3587.80603@vmware.com)
> Just on a theoretical level, I would assume your patch to scale better since
> it uses fetch-and-add instead of compare-and-swap for acquiring a shared
> lock. But in practice it might be a wash.
I've tried compare-and-swap for shared acquisition and it performed
worse. I didn't try your patch though, as you seemed to have concluded
it's a wash after doing the unlocked test.
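
The scalability difference boils down to this (simplified illustration,
not the patch's actual code): under contention the CAS version has to
loop because the lock word keeps changing under it, while the xadd
version always succeeds in a single instruction and only backs off if
the lock turns out to be held exclusively:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define EXCLUSIVE_BIT (UINT32_C(1) << 31)

/* CAS: each waiter may retry many times when the word keeps changing. */
static bool
shared_lock_cas(_Atomic uint32_t *lock)
{
    uint32_t old = atomic_load(lock);

    while ((old & EXCLUSIVE_BIT) == 0)
    {
        if (atomic_compare_exchange_weak(lock, &old, old + 1))
            return true;        /* got a shared lock */
        /* old was refreshed by the failed CAS; loop and retry */
    }
    return false;               /* exclusively locked: caller must wait */
}

/* xadd: one unconditional increment, undone only if exclusively held. */
static bool
shared_lock_xadd(_Atomic uint32_t *lock)
{
    uint32_t old = atomic_fetch_add(lock, 1);

    if ((old & EXCLUSIVE_BIT) == 0)
        return true;            /* got a shared lock */

    atomic_fetch_sub(lock, 1);  /* back off; caller must wait */
    return false;
}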
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services