From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Jeff Janes'" <jeff(dot)janes(at)gmail(dot)com>, "'pgsql-hackers'" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Resource Owner reassign Locks
Date: 2012-06-11 06:28:53
Message-ID: 005701cd479b$7960ea70$6c22bf50$@kapila@huawei.com
Lists: pgsql-hackers
I have a few doubts regarding the logic of ResourceOwnerRememberLock() and
ResourceOwnerForgetLock():

1. In ResourceOwnerRememberLock(), when the lock count is already
MAX_RESOWNER_LOCKS, the lock is not added to the lock array, but the count is
still incremented to 11.
Now in ResourceOwnerForgetLock(), the search branch cannot be entered unless
the count is <= MAX_RESOWNER_LOCKS. So how will that lock ever be forgotten?

2. ResourceOwnerForgetLock() decrements the lock count before removing the
lock, so if it does not find the lock in the lock array, the count is still
decremented even though the array still holds the same number of locks.
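To make the two points concrete, this is roughly how I read the relevant
logic (a paraphrased sketch with simplified types, not the patch text itself;
names and details may differ):

/* Paraphrased sketch of the logic in question -- simplified types,
 * not the actual patch text. */
#define MAX_RESOWNER_LOCKS 10

typedef struct LOCALLOCK LOCALLOCK;             /* opaque for this sketch */

typedef struct ResourceOwnerData
{
    int         nlocks;                         /* number of owned locks */
    LOCALLOCK  *locks[MAX_RESOWNER_LOCKS];      /* list of owned locks */
} ResourceOwnerData, *ResourceOwner;

static void
ResourceOwnerRememberLock(ResourceOwner owner, LOCALLOCK *locallock)
{
    if (owner->nlocks < MAX_RESOWNER_LOCKS)
        owner->locks[owner->nlocks] = locallock;
    /* doubt 1: when nlocks is already MAX_RESOWNER_LOCKS the lock is not
     * stored, yet the count is still bumped to 11 */
    owner->nlocks++;
}

static void
ResourceOwnerForgetLock(ResourceOwner owner, LOCALLOCK *locallock)
{
    if (owner->nlocks <= MAX_RESOWNER_LOCKS)
    {
        int         i;

        /* doubt 2: the count is decremented before we know whether the
         * lock is actually present in the array */
        owner->nlocks--;
        for (i = owner->nlocks; i >= 0; i--)
        {
            if (owner->locks[i] == locallock)
            {
                owner->locks[i] = owner->locks[owner->nlocks];
                return;
            }
        }
    }
    /* doubt 1 again: once nlocks has gone past MAX_RESOWNER_LOCKS, the
     * branch above is never taken, so the lock is never forgotten here */
}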
-----Original Message-----
From: pgsql-hackers-owner(at)postgresql(dot)org
[mailto:pgsql-hackers-owner(at)postgresql(dot)org] On Behalf Of Jeff Janes
Sent: Monday, June 11, 2012 2:10 AM
To: pgsql-hackers
Subject: [HACKERS] Resource Owner reassign Locks
As discussed in several different email threads here and on the performance
list, when using pg_dump on a large number of objects, the server shows
quadratic behavior in LockReassignCurrentOwner, where it has to dig through
the entire local lock table to push one or two locks up from the portal being
dropped to its parent.
The attached patch fixes that by remembering up to 10 local locks, and
pushing them up specifically rather than digging through the entire lock
table. If the list overflows, then it reverts to the old behavior of
digging through the entire local lock table.
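Roughly, the fast path looks like this (an illustrative sketch, not the patch
itself; the types are simplified and the two helpers are invented for the
example):

/* Illustrative sketch only -- not the patch code. */
#define MAX_RESOWNER_LOCKS 10               /* size of the per-owner lock list */

typedef struct LOCALLOCK LOCALLOCK;

typedef struct ResourceOwnerData
{
    int         nlocks;                     /* lock count; > MAX means overflowed */
    LOCALLOCK  *locks[MAX_RESOWNER_LOCKS];  /* remembered locks */
} ResourceOwnerData, *ResourceOwner;

/* invented helpers, standing in for the real local-lock-table operations */
extern void ReassignOneLockToParent(LOCALLOCK *lock, ResourceOwner parent);
extern void ScanLocalLockTableAndReassign(ResourceOwner owner, ResourceOwner parent);

void
LockReassignCurrentOwner_sketch(ResourceOwner owner, ResourceOwner parent)
{
    if (owner->nlocks <= MAX_RESOWNER_LOCKS)
    {
        int         i;

        /* fast path: push up only the locks this owner remembered */
        for (i = 0; i < owner->nlocks; i++)
            ReassignOneLockToParent(owner->locks[i], parent);
    }
    else
    {
        /* list overflowed: revert to the old behavior of digging through
         * the entire local lock table */
        ScanLocalLockTableAndReassign(owner, parent);
    }
}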
The same change was made to the case where the locks are being released,
rather than reassigned (i.e. subtransaction abort rather than commit). I
have no evidence that digging through the local lock table during bulk
release was ever a bottleneck, but it seemed inconsistent not to make that
change as well.
When it needs to forget a lock, it searches backwards through the list of
remembered locks and then simply moves the last lock into the place of the
one being forgotten. The other ResourceOwner Forget functions slide the
entire list down to close the gap rather than using this selection-sort-like
method, and I don't understand why they do that. If Forgets are strictly
LIFO, it would make no difference. If they are mostly LIFO but occasionally
not, maybe the insertion method would win over the selection method. From
what I can tell, locks are mostly released in bulk at transaction end anyway,
and rarely released explicitly.
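For concreteness, the two removal strategies compare roughly like this
(illustrative only, not taken from the patch or from resowner.c; reusing the
simplified LOCALLOCK from the sketch above):

/* What this patch does: overwrite the hole with the last element.
 * O(1) once the element is found; does not preserve order. */
static void
remove_swap_last(LOCALLOCK **arr, int *n, int hole)
{
    arr[hole] = arr[*n - 1];
    (*n)--;
}

/* What the other ResourceOwner Forget functions do: slide the tail down to
 * close the gap.  Preserves order, but costs O(n - hole) moves. */
static void
remove_slide_down(LOCALLOCK **arr, int *n, int hole)
{
    int         i;

    for (i = hole; i < *n - 1; i++)
        arr[i] = arr[i + 1];
    (*n)--;
}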
This patch reduces the time needed to dump 20,000 simple empty tables from
3m50.903s to 20.695s, with of course larger gains at larger table counts.
dropdb test; createdb test
for f in `seq 1 10000 4000000` ; do
  perl -le "print qq{create table foo\$_ (k integer , v integer);} foreach ($f..$f+9999)" | psql test >& /dev/null
  time pg_dump test | wc -c
done
As for degrading performance in other cases, I think the best test case is
"pgbench -P" (implemented in another patch in this commitfest), which has a
loop that pushes one or two locks up from a portal to the parent (which
already owns them, due to previous rounds of the same loop) very frequently.
There might be a performance degradation of 0.5% or so, but it is less than
the run-to-run variation. I plan to run some longer tests to get a better
estimate. If there is a degradation in that range, how important is that?
I wanted to test a real worst case, which would be a malicious ordering of
lock releases (take 10 locks, release the first lock taken, then release the
other 9 in reverse order), but in the absence of an UNLOCK TABLE command, I
can't figure out how to engineer that.
Cheers,
Jeff