Re: reducing the overhead of frequent table locks - now, with WIP patch

From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: reducing the overhead of frequent table locks - now, with WIP patch
Date: 2011-06-06 12:02:07
Message-ID: 4DECC1BF.2030209@enterprisedb.com
Lists: pgsql-hackers

On 06.06.2011 07:12, Robert Haas wrote:
> I did some further investigation of this. It appears that more than
> 99% of the lock manager lwlock traffic that remains with this patch
> applied has locktag_type == LOCKTAG_VIRTUALTRANSACTION. Every SELECT
> statement runs in a separate transaction, and for each new transaction
> we run VirtualXactLockTableInsert(), which takes a lock on the vxid of
> that transaction, so that other processes can wait for it. That
> requires acquiring and releasing a lock manager partition lock, and we
> have to do the same thing a moment later at transaction end to dump
> the lock.
>
> A quick grep seems to indicate that the only places where we actually
> make use of those VXID locks are in DefineIndex(), when CREATE INDEX
> CONCURRENTLY is in use, and during Hot Standby, when max_standby_delay
> expires. Considering that these are not commonplace events, it seems
> tremendously wasteful to incur the overhead for every transaction. It
> might be possible to make the lock entry spring into existence "on
> demand" - i.e. if a backend wants to wait on a vxid entry, it creates
> the LOCK and PROCLOCK objects for that vxid. That presents a few
> synchronization challenges, and we also have to make sure that the
> backend that's just been "given" a lock knows that it needs to release
> it, but those seem like they might be manageable problems, especially
> given the new infrastructure introduced by the current patch, which
> already has to deal with some of those issues. I'll look into this
> further.
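
For reference, the two lmgr.c routines involved look roughly like this
(quoting from memory rather than from the tree, so the details may be
slightly off):

    void
    VirtualXactLockTableInsert(VirtualTransactionId vxid)
    {
        LOCKTAG     tag;

        SET_LOCKTAG_VIRTUALTRANSACTION(tag, vxid);

        /* taken at transaction start, dumped again at transaction end */
        (void) LockAcquire(&tag, ExclusiveLock, false, false);
    }

    void
    VirtualXactLockTableWait(VirtualTransactionId vxid)
    {
        LOCKTAG     tag;

        SET_LOCKTAG_VIRTUALTRANSACTION(tag, vxid);

        /* CREATE INDEX CONCURRENTLY and hot standby block here until the
         * vxid's transaction releases its lock at commit or abort */
        (void) LockAcquire(&tag, ShareLock, false, false);
        LockRelease(&tag, ShareLock, false);
    }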

At the moment, the transaction with a given vxid acquires an ExclusiveLock
on that vxid, and anyone who wants to wait for it to finish acquires a
ShareLock on it. If we simply reverse that, so that the transaction itself
takes a ShareLock and anyone wanting to wait on it takes an ExclusiveLock,
will this fastlock patch bust this bottleneck too?
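
Concretely, the swap would be something like this (sketch only, untested):

    /* in VirtualXactLockTableInsert(): the transaction locks its own vxid
     * in the weak mode ... */
    (void) LockAcquire(&tag, ShareLock, false, false);      /* was ExclusiveLock */

    /* ... and in VirtualXactLockTableWait(): waiters ask for the strong
     * mode, so they still block until the lock is released */
    (void) LockAcquire(&tag, ExclusiveLock, false, false);  /* was ShareLock */
    LockRelease(&tag, ExclusiveLock, false);

ShareLock and ExclusiveLock conflict with each other, so the wait-for-vxid
behaviour stays the same either way; the question is just whether the
transaction's lock on its own vxid can then go through the fast path.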

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
