From: Andres Freund <andres(at)anarazel(dot)de>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Reviewing freeze map code
Date: 2016-06-21 03:51:38
Message-ID: 20160621035138.rwbbqcnaoqgnd5iv@alap3.anarazel.de
Lists: pgsql-hackers
On 2016-06-21 08:59:13 +0530, Amit Kapila wrote:
> Can we consider using some strategy to avoid deadlocks without releasing
> the lock on the old page? Suppose we had a mechanism such that
> RelationGetBufferForTuple() always returns a new buffer whose target
> block is greater than the old block (on which we already hold a lock). I
> think the tricky part is whether we can get anything like that from the
> FSM. Also, there could be cases where we end up extending the heap even
> though pages with free space exist, because we ignored them: their block
> numbers are smaller than the block number we hold the lock on.
I can't see that being acceptable, from a space-usage POV.
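
For context, the property that scheme tries to preserve is plain ordered
locking: if every backend takes page locks in increasing block-number
order, no two backends can wait on each other. A minimal sketch, using
the real bufmgr calls but a made-up helper name:

/*
 * Hypothetical helper, for illustration only: always lock the
 * lower-numbered block first.  heap_update() achieves the same
 * ordering today, but only after releasing the old page's lock.
 */
static void
lock_two_buffers_ordered(Buffer a, Buffer b)
{
	if (BufferGetBlockNumber(a) < BufferGetBlockNumber(b))
	{
		LockBuffer(a, BUFFER_LOCK_EXCLUSIVE);
		LockBuffer(b, BUFFER_LOCK_EXCLUSIVE);
	}
	else
	{
		LockBuffer(b, BUFFER_LOCK_EXCLUSIVE);
		LockBuffer(a, BUFFER_LOCK_EXCLUSIVE);
	}
}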
> > So far the best idea I have - and it's really not a good one - is to
> > invent a new hint-bit that tells concurrent updates to acquire a
> > heavyweight tuple lock, while releasing the page-level lock. If that
> > hint bit does not require any other modifications - and we don't need an
> > xid in xmax for this use case - that'll avoid doing all the other
> > `already_marked` stuff early, which should address the correctness
> > issue.
> >
>
> Don't we need to clear such a flag in case of error? Also, don't we need
> to reset it later, e.g. when modifying the old page before WAL-logging it?
If the flag just says "acquire a heavyweight lock", then there's no need
for that. That's cheap enough to just do if the flag is erroneously set;
at least I can't see any reason to require clearing it.
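
Roughly, the updater path could look like the fragment below -
HEAP_UPDATE_PENDING is a made-up infomask bit, heap_acquire_tuplock() is
the existing helper in heapam.c, and the surrounding variables are
heap_update()'s locals. A sketch of the idea, not a worked-out patch:

/*
 * Hypothetical fragment inside heap_update(): if a concurrent
 * updater has flagged the tuple, drop the page lock, take the
 * heavyweight tuple lock, and recheck the tuple's state.
 */
if (oldtup.t_data->t_infomask & HEAP_UPDATE_PENDING)	/* made-up bit */
{
	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
	heap_acquire_tuplock(relation, &(oldtup.t_self), *lockmode,
						 LockWaitBlock, &have_tuple_lock);
	LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
	goto l2;		/* same recheck as for other concurrent updates */
}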
> > It's kinda invasive though, and probably has performance
> > implications.
> >
>
> Do you see a performance implication due to requiring the heavyweight
> tuple lock in more cases than now, or something else?
Because of that, yes.
> Some other ways could be:
>
> Before releasing the lock on the buffer containing the old tuple, clear
> the VM bit and the page's visibility info and WAL-log that. I think this
> could impact performance, depending on how frequently we need to perform
> this action.
Doubling the number of xlog inserts in heap_update would certainly be
measurable :(. My guess is that the heavyweight tuple lock approach will
be less expensive.
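
Concretely, the proposal amounts to an extra step like the following per
update, before releasing the old page's lock. XLOG_HEAP2_CLEAR_VM is a
made-up record type - today the clearing is covered by the main update
record - so this would be the second XLogInsert() in question:

/*
 * Sketch only.  Clear the page's all-visible flag and the VM bit,
 * and emit a separate WAL record for it while still holding the
 * exclusive lock on the old page.
 */
if (PageIsAllVisible(page))
{
	START_CRIT_SECTION();

	PageClearAllVisible(page);
	visibilitymap_clear(relation, BufferGetBlockNumber(buffer), vmbuffer);
	MarkBufferDirty(buffer);

	if (RelationNeedsWAL(relation))
	{
		XLogRecPtr	recptr;

		XLogBeginInsert();
		XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
		recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_CLEAR_VM);
		PageSetLSN(page, recptr);
	}

	END_CRIT_SECTION();
}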
Andres