From: Dan Ports <drkp(at)csail(dot)mit(dot)edu>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: heikki(dot)linnakangas(at)enterprisedb(dot)com, YAMAMOTO Takashi <yamt(at)mwd(dot)biglobe(dot)ne(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: SSI bug?
Date: 2011-02-23 01:49:51
Message-ID: 20110223014951.GF61128@csail.mit.edu
Lists: pgsql-hackers
On Tue, Feb 22, 2011 at 05:54:49PM -0600, Kevin Grittner wrote:
> I'm not sure it's safe to assume that the index page won't get
> reused before the local lock information is cleared. In the absence
> of a clear proof that it is safe, or some enforcement mechanism to
> ensure that it is, I don't think we should make this assumption.
> Off-hand I can't think of a clever way to make this safe which would
> cost less than taking out the LW lock and checking the definitive
> shared memory HTAB, but that might be for lack of creative thinking
> at the moment..
Hmm. Yeah, I wasn't sure about that one, and having now thought about
it some more I think it isn't safe -- consider adding a lock on an
index page concurrently with another backend merging that page into
another one.
The obvious solution to me is to just keep the lock on both the old and
new pages. The downside is that because this requires allocating a new
lock and is in a context where we're not allowed to fail, we'll need to
fall back on acquiring the relation lock just as we do for page splits.
I was going to bemoan the extra complexity this would add -- but
actually, couldn't we just replace PredicateLockPageCombine with a call
to PredicateLockPageSplit since they'd now do the same thing?
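Roughly, I'm picturing something like this (untested sketch, assuming
PredicateLockPageSplit keeps its current signature):

void
PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
                         BlockNumber newblkno)
{
    /*
     * Since we can't safely edit the backend-local lock tables, a combine
     * has to keep the locks on the old page while also transferring them
     * to the new one -- which is exactly what a split already does.
     */
    PredicateLockPageSplit(relation, oldblkno, newblkno);
}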
> The only alternative I see would be to use some form of asynchronous
> notification of the new locks so that the local table can be
> maintained. That seems overkill without some clear evidence that it
> is needed.
I agree. It is certainly weird and undesirable that the backend-local
lock table is not always accurate, but I don't see a good way to keep
it up to date without the cure being worse than the disease.
> I *really* wouldn't want to go back to needing LW locks
> to maintain this info; as you know (and stated only for the benefit
> of the list), it was a pretty serious contention point in early
> profiling and adding the local table was a big part of getting an
> early benchmark down from a 14+% performance hit for SSI to a 1.8%
> performance hit.
Yes, it's definitely important for a backend to be able to check
whether it's already holding a lock (even if that's just a hint)
without having to take locks.
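To illustrate for the list, the fast path amounts to something like the
following (a simplified, hypothetical BackendHoldsPredicateLock(); the real
check in predicate.c does more, but the point is that it only touches the
backend-local hash):

static bool
BackendHoldsPredicateLock(const PREDICATELOCKTARGETTAG *targettag)
{
    LOCALPREDICATELOCK *locallock;

    /* No LWLock needed: LocalPredicateLockHash is backend-private. */
    locallock = (LOCALPREDICATELOCK *)
        hash_search(LocalPredicateLockHash, targettag, HASH_FIND, NULL);

    /*
     * The local entry is only a hint, but a hit means we can answer
     * without touching shared memory; a caller needing a definitive
     * answer on a miss would consult the LWLock-protected shared HTAB.
     */
    return (locallock != NULL && locallock->held);
}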
Let me add one more piece of info for the benefit of the list: a
backend's local lock table contains not just locks held by the backend,
but also an entry and refcount for every parent of a lock it holds.
This is used to determine when to promote to one of the coarser-grained
parent locks. It's both unnecessary and undesirable for that info to be
in shared memory.
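Concretely, each local entry carries roughly this much state (field names
as in predicate_internals.h):

typedef struct LOCALPREDICATELOCK
{
    /* hash key: identifies the locked object (relation/page/tuple) */
    PREDICATELOCKTARGETTAG tag;

    /* is this lock held itself, or present only as a parent of others? */
    bool        held;

    /* number of child locks currently held under this target */
    int         childLocks;
} LOCALPREDICATELOCK;

When childLocks for a parent passes the promotion threshold, we acquire the
parent lock itself and can then clean up the finer-grained ones.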
Dan
--
Dan R. K. Ports MIT CSAIL http://drkp.net/