From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Dave Page <dpage(at)pgadmin(dot)org>, Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>, Dimitri Fontaine <dimitri(at)2ndquadrant(dot)fr>, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: reducing the overhead of frequent table locks - now, with WIP patch
Date: 2011-06-07 20:52:43
Message-ID: BANLkTimvYTGVa9k+1TPUvxyhBQcsz5jDig@mail.gmail.com
Lists: pgsql-hackers
On Tue, Jun 7, 2011 at 12:51 PM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> Stefan/Robert's observation that we perform a
> VirtualXactLockTableInsert() to no real benefit is a good one.
>
> It leads to the following simple patch to remove one lock table hit
> per transaction. It's a lot smaller impact on the LockMgr locks, but
> it will still be substantial. Performance tests please?
>
> This patch is much less invasive and has impact only on CREATE INDEX
> CONCURRENTLY and Hot Standby. It's taken me about 2 hours to write and
> test and there's no way it will cause any delay at all to the release
> schedule. (Though I'm sure Robert can improve it).
Incidentally, I spent the morning (before we got off on this tangent)
writing a patch to make VXID locks spring into existence on demand
instead of creating them for every transaction. This applies on top
of my fastlock patch and fits in quite nicely with the existing
infrastructure that patch creates, and it helps modestly. Well,
according to one metric, at least, it helps dramatically: traffic on
each lock manager partition lock drops from hundreds of thousands of
lock requests in a five-minute period to just a few hundred. But the
actual user-visible performance benefit is fairly modest - it goes
from ~36K TPS unpatched to ~129K TPS with the fast relation locks
alone to ~138K TPS with the fast relation locks plus a similar hack
for fast VXID locks (all results with pgbench -c 36 -j 36 -n -S -T 300
on a Nate-Boley-provided 24-core box). Now, I'm not going to knock a
7% performance improvement; the benefit may be larger on Stefan's
80-core box, and I think it's definitely worth implementing that
optimization for 9.2. But it appears, at least based on the testing
I've done so far, that the fast relation locks are the big win, and
after that it gets much harder to make an improvement. If
we were to fix ONLY the vxid issue in 9.1 as you were advocating, the
benefit would probably be much less, because at least in my tests, the
fast relation lock patch increases overall system throughput
sufficiently to cause a 12x increase in contention due to vxid
traffic.
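To make "spring into existence on demand" concrete, here's a toy
standalone sketch of the idea (FastPathSlot, vxid_acquire, and friends
are made-up names for illustration, not the actual patch): taking the
VXID lock at transaction start just flips a per-backend flag, and only
a waiter that actually needs to block promotes it into a real
lock-table entry.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct
{
    atomic_bool fastpath;   /* lock held, advertised only in this slot */
    bool        in_table;   /* stand-in for a real lock table entry */
} FastPathSlot;

/* Transaction start: no shared-hash-table insert, just a local flag. */
static void
vxid_acquire(FastPathSlot *slot)
{
    atomic_store(&slot->fastpath, true);
}

/* A waiter (think CREATE INDEX CONCURRENTLY or Hot Standby) that must
 * wait on the VXID first promotes the flag into a real table entry. */
static bool
vxid_promote(FastPathSlot *slot)
{
    if (atomic_exchange(&slot->fastpath, false))
    {
        slot->in_table = true;  /* now there is something to wait on */
        return true;
    }
    return false;               /* backend already released: no conflict */
}

/* Transaction end: if no one promoted us, release costs nothing either. */
static void
vxid_release(FastPathSlot *slot)
{
    if (!atomic_exchange(&slot->fastpath, false))
        slot->in_table = false; /* clean up the entry a waiter created */
}

int
main(void)
{
    FastPathSlot slot = {0};

    vxid_acquire(&slot);
    printf("promoted by waiter: %d\n", vxid_promote(&slot));   /* 1 */
    vxid_release(&slot);
    printf("table entry left:   %d\n", slot.in_table);         /* 0 */
    return 0;
}

The point is that the common case - a transaction that starts and ends
with nobody ever waiting on its VXID - never touches the shared lock
table at all.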
With both the fast-relation locks and the fast-vxid locks in place, as
I mentioned, the lock manager partition lock contention is completely
gone; in fact the lock manager partition traffic is pretty much gone.
The remaining contention comes mostly from the free list locks (blk
~13%) and the buffer mapping locks (which were roughly: 800k shacq,
12000 exacq, 850 blk). Interestingly, I saw that one buffer mapping
lock got about 5x hotter than the others, which is odd, but possibly
harmless, since the absolute amount of blocking is really rather small
(850 blocked out of ~812k acquisitions, i.e. ~0.1%). At least for
read performance, we may need to start looking
less at reducing lock contention and more at making the actual
underlying operations faster.
In the process of doing all of this, I discovered that I had neglected
to update GetLockConflicts() and, consequently, fastlock-v2 is broken
insofar as CREATE INDEX CONCURRENTLY and Hot Standby are concerned. I
will fix that and post an updated version; and I'll also post the
follow-on patch to accelerate the VXID locks at that time. In the
meantime, I would appreciate any review or testing of the remainder of
the patch.
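For context, GetLockConflicts() is what CREATE INDEX CONCURRENTLY and
Hot Standby use to find every virtual transaction holding a
conflicting lock; once relation locks can live in per-backend
fast-path slots, it has to scan those slots in addition to the main
lock table. A toy illustration of the scan I'd left out (the types
and names below are invented for the example, not the real
structures):

#include <stdio.h>

#define MAX_BACKENDS 4
#define FP_SLOTS 16

typedef struct
{
    int vxid;                 /* backend's virtual transaction id */
    int fp_relid[FP_SLOTS];   /* relations locked via fast path, 0 = empty */
} Backend;

/* Find every vxid holding a lock on relid.  The broken code scanned
 * only the main lock table; the fix adds the fast-path scan below. */
static int
get_lock_conflicts(Backend *procs, int nprocs, int relid, int *out)
{
    int found = 0;

    /* ... scan of the main shared-hash-table locks elided ... */

    for (int i = 0; i < nprocs; i++)        /* the part I'd missed */
        for (int j = 0; j < FP_SLOTS; j++)
            if (procs[i].fp_relid[j] == relid)
                out[found++] = procs[i].vxid;
    return found;
}

int
main(void)
{
    Backend procs[MAX_BACKENDS] = {
        {.vxid = 101, .fp_relid = {16384}},
        {.vxid = 102, .fp_relid = {16385}},
    };
    int conflicts[MAX_BACKENDS * FP_SLOTS];
    int n = get_lock_conflicts(procs, MAX_BACKENDS, 16384, conflicts);

    printf("%d conflicting vxid(s), first = %d\n", n,
           n ? conflicts[0] : -1);
    return 0;
}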
> If we combine this patch with Koichi-san's recommended changes to the
> number of lock partitions, we will have considerable impact for 9.1.
> Robert will still get his day in the sun, just with 9.2.
At this point I am of the opinion that there is little point in
raising the number of lock partitions. If you are doing very simple
SELECT statements across a large number of tables, then increasing the
number of lock partitions will help. On read-write workloads, there's
really no benefit, because WALInsertLock contention is the bottleneck.
And on read-only workloads that only touch one or a handful of
tables, the individual lock manager partitions where the locks fall
get very hot regardless of how many partitions you have. Now that
does still leave some space for improvement - specifically, lots of
tables, read-only or read-mostly - but the fast-relation-lock and
fast-vxid-lock stuff will address those bottlenecks far more
thoroughly. And increasing the number of lock partitions also has a
downside: it will slow down end-of-transaction cleanup, which is
already an area where we know we have problems.
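To spell out the cleanup problem: each lock's hash code picks one of
NUM_LOCK_PARTITIONS partition lwlocks, and end-of-transaction cleanup
has to acquire every partition lock that any of the transaction's
locks hashed to, so raising the partition count makes that walk
longer. Rough sketch (the hash values are arbitrary; in the real code
the mapping is a macro over the lock tag hash):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_LOCK_PARTITIONS 16  /* the current default */

static uint32_t
lock_partition(uint32_t hashcode)
{
    return hashcode % NUM_LOCK_PARTITIONS;
}

int
main(void)
{
    /* Hash codes of the locks one backend holds at commit. */
    uint32_t held[] = {0xdeadbeef, 0x12345678, 0x0badf00d, 0x00c0ffee};
    bool touched[NUM_LOCK_PARTITIONS] = {false};
    int acquires = 0;

    /* Cleanup must take every partition lock that any held lock
     * hashed to; more partitions means more of these. */
    for (unsigned i = 0; i < sizeof held / sizeof held[0]; i++)
    {
        uint32_t p = lock_partition(held[i]);

        if (!touched[p])
        {
            touched[p] = true;
            acquires++;         /* one LWLockAcquire/Release pair */
        }
    }
    printf("%d partition lock acquisitions at commit\n", acquires);
    return 0;
}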
There might be some point in raising the number of buffer mapping
partitions, but I don't know how to create a test case where it's
actually material, especially without the fastlock stuff.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company