From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
Cc: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: reducing the overhead of frequent table locks - now, with WIP patch
Date: 2011-06-06 04:12:32
Message-ID: BANLkTin_EXSCwdsxKbsSuPAfF-0occa-dg@mail.gmail.com
Lists: pgsql-hackers
On Sun, Jun 5, 2011 at 10:16 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> I'm definitely interested in investigating what to do
> about that, but I don't think it's this patch's problem to fix all of
> our lock manager bottlenecks.
I did some further investigation of this. It appears that more than
99% of the lock manager lwlock traffic that remains with this patch
applied has locktag_type == LOCKTAG_VIRTUALTRANSACTION. Every SELECT
statement runs in a separate transaction, and for each new transaction
we run VirtualXactLockTableInsert(), which takes a lock on the vxid of
that transaction, so that other processes can wait for it. That
requires acquiring and releasing a lock manager partition lock, and we
have to do the same thing a moment later at transaction end to dump
the lock.
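For context, the per-transaction cost amounts to roughly the
following (paraphrased from src/backend/storage/lmgr/lock.c and
simplified, so treat it as a sketch rather than the exact source).
LockAcquire() has to hash the tag and take the corresponding lock
manager partition lwlock to set up the LOCK and PROCLOCK entries, and
LockReleaseAll() at transaction end takes the partition lock again to
tear them down:

#include "storage/lock.h"

/* Called at every transaction start, even for a bare SELECT. */
void
VirtualXactLockTableInsert(VirtualTransactionId vxid)
{
    LOCKTAG     tag;

    Assert(VirtualTransactionIdIsValid(vxid));

    SET_LOCKTAG_VIRTUALTRANSACTION(tag, vxid);

    /*
     * ExclusiveLock on our own vxid; it never conflicts with anything
     * except a ShareLock taken by a backend that wants to wait for us.
     */
    (void) LockAcquire(&tag, ExclusiveLock, false, false);
}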
A quick grep seems to indicate that the only places where we actually
make use of those VXID locks are in DefineIndex(), when CREATE INDEX
CONCURRENTLY is in use, and during Hot Standby, when max_standby_delay
expires. Considering that these are not commonplace events, it seems
tremendously wasteful to incur the overhead for every transaction. It
might be possible to make the lock entry spring into existence "on
demand" - i.e. if a backend wants to wait on a vxid entry, it creates
the LOCK and PROCLOCK objects for that vxid. That presents a few
synchronization challenges, and we also have to make sure that the
backend that's just been "given" a lock knows that it needs to release
it, but those seem like they might be manageable problems, especially
given the new infrastructure introduced by the current patch, which
already has to deal with some of those issues. I'll look into this
further.
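To make the "on demand" idea a bit more concrete, here is a purely
hypothetical sketch of what the waiter's side might look like. The
helper MaterializeVxidLock() does not exist; it, and the handshake
that tells the owning backend it now holds a lock it must release,
are exactly the synchronization problems mentioned above:

#include "storage/lock.h"
#include "storage/proc.h"
#include "storage/sinvaladt.h"

/*
 * Hypothetical: wait for vxid to end, creating the lock table entries
 * only now, on the owner's behalf, instead of at its transaction start.
 */
bool
VirtualXactLockWait(VirtualTransactionId vxid)
{
    LOCKTAG     tag;
    PGPROC     *owner;

    SET_LOCKTAG_VIRTUALTRANSACTION(tag, vxid);

    /* Find the backend currently running this vxid, if any. */
    owner = BackendIdGetProc(vxid.backendId);
    if (owner == NULL || owner->lxid != vxid.localTransactionId)
        return true;        /* transaction already ended; no waiting */

    /*
     * Invented helper: create the LOCK and PROCLOCK entries and grant
     * ExclusiveLock to the owner as though it had called LockAcquire()
     * itself, then flag the owner so it knows to release the lock at
     * transaction end.  Racing against the owner exiting concurrently
     * is the hard part being glossed over here.
     */
    MaterializeVxidLock(owner, &tag);

    /* Now block in the ordinary way until the owner lets go. */
    (void) LockAcquire(&tag, ShareLock, false, false);
    LockRelease(&tag, ShareLock, false);
    return true;
}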
It's likely that if we lick this problem, the BufFreelistLock and
BufMappingLocks are going to be the next hot spots. Of course, we're
ignoring the ten-thousand pound gorilla in the corner, which is that
on write workloads we have a pretty bad contention problem with
WALInsertLock, which I fear will not be so easily addressed. But one
problem at a time, I guess.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company