From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Jacob Champion <pchampion(at)pivotal(dot)io>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [PATCH] Assert that the correct locks are held when calling PageGetLSN()
Date: 2017-10-02 14:19:54
Message-ID: CAB7nPqQ_P7FRL9sQWzwJ1WojHo6phRG9xtp0PR+qWo3G0V+o8A@mail.gmail.com
Lists: pgsql-hackers
On Mon, Oct 2, 2017 at 9:09 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> I think the first question we ought to be asking ourselves is whether
> any of the PageGetLSN -> BufferGetLSNAtomic changes the patch
> introduces are live bugs. If they are, then we ought to fix those
> separately (and probably back-patch). If they are not, then we need
> to think about how to adjust the patch so that it doesn't complain
> about things that are in fact OK.
Looking one by one at each item the patch touches, based on the contract
defined in transam/README...
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -640,7 +640,7 @@ gistdoinsert(Relation r, IndexTuple itup, Size freespace, GISTSTATE *giststate)
}
stack->page = (Page) BufferGetPage(stack->buffer);
- stack->lsn = PageGetLSN(stack->page);
+ stack->lsn = BufferGetLSNAtomic(stack->buffer);
There is an incorrect use of PageGetLSN here: only a shared content lock
is taken on the page, and PageGetLSN does not take the buffer header
lock.
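For reference, the contract as I understand it from transam/README is
that once hint bits can be WAL-logged (checksums or wal_log_hints), the
page LSN may only be read while holding either an exclusive content
lock, or a shared content lock plus the buffer header spinlock. The
second case is what BufferGetLSNAtomic covers; roughly, from memory of
bufmgr.c rather than a verbatim copy, it does something like this:

/* Sketch only, not verbatim bufmgr.c: take the buffer header spinlock
 * around the LSN read when hint bits are WAL-logged, so a shared
 * content lock is enough for the caller. */
XLogRecPtr
BufferGetLSNAtomic(Buffer buffer)
{
    BufferDesc *bufHdr = GetBufferDescriptor(buffer - 1);
    char       *page = BufferGetPage(buffer);
    XLogRecPtr  lsn;
    uint32      buf_state;

    /* Fast path: no torn read to worry about if hints are not logged. */
    if (!XLogHintBitIsNeeded() || BufferIsLocal(buffer))
        return PageGetLSN(page);

    Assert(BufferIsPinned(buffer));

    buf_state = LockBufHdr(bufHdr);
    lsn = PageGetLSN(page);
    UnlockBufHdr(bufHdr, buf_state);

    return lsn;
}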
@@ -890,7 +890,7 @@ gistFindPath(Relation r, BlockNumber child, OffsetNumber *downlinkoffnum)
break;
}
- top->lsn = PageGetLSN(page);
+ top->lsn = BufferGetLSNAtomic(buffer);
Here as well only a shared lock is taken on the page.
@@ -61,7 +61,7 @@ gistkillitems(IndexScanDesc scan)
* read. killedItems could be not valid so LP_DEAD hints applying is not
* safe.
*/
- if (PageGetLSN(page) != so->curPageLSN)
+ if (BufferGetLSNAtomic(buffer) != so->curPageLSN)
{
UnlockReleaseBuffer(buffer);
so->numKilled = 0; /* reset counter */
@@ -384,7 +384,7 @@ gistScanPage(IndexScanDesc scan, GISTSearchItem *pageItem, double *myDistances,
* safe to apply LP_DEAD hints to the page later. This allows us to drop
* the pin for MVCC scans, which allows vacuum to avoid blocking.
*/
- so->curPageLSN = PageGetLSN(page);
+ so->curPageLSN = BufferGetLSNAtomic(buffer);
Those two as well only use a shared lock.
@@ -249,7 +249,7 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
ptr = (GistBDItem *) palloc(sizeof(GistBDItem));
ptr->blkno = ItemPointerGetBlockNumber(&(idxtuple->t_tid));
- ptr->parentlsn = PageGetLSN(page);
+ ptr->parentlsn = BufferGetLSNAtomic(buffer);
ptr->next = stack->next;
stack->next = ptr;
Here as well only a shared lock is taken; this is a VACUUM code path for
GiST indexes.
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -1224,7 +1224,7 @@ _bt_readpage(IndexScanDesc scan, ScanDirection dir, OffsetNumber offnum)
* safe to apply LP_DEAD hints to the page later. This allows us to drop
* the pin for MVCC scans, which allows vacuum to avoid blocking.
*/
- so->currPos.lsn = PageGetLSN(page);
+ so->currPos.lsn = BufferGetLSNAtomic(so->currPos.buf);
Here the caller of _bt_readpage should have taken a lock, but the page
is not necessarily pinned at that point. Still, _bt_getbuf makes sure
that a pin is held.
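To make that concrete, here is a sketch of the generic nbtree pattern
(not the actual _bt_readpage caller, just an illustration with made-up
variable names) of why the atomic read is fine once the buffer has gone
through _bt_getbuf:

/* Sketch: _bt_getbuf returns the buffer pinned and content-locked, so
 * BufferGetLSNAtomic() can safely grab the buffer header lock on top of
 * the shared content lock. */
buf = _bt_getbuf(rel, blkno, BT_READ);      /* pin + read content lock */
so->currPos.lsn = BufferGetLSNAtomic(buf);  /* spinlock-protected LSN read */
_bt_relbuf(rel, buf);                       /* release content lock and pin */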
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -1772,7 +1772,7 @@ _bt_killitems(IndexScanDesc scan)
return;
page = BufferGetPage(buf);
- if (PageGetLSN(page) == so->currPos.lsn)
+ if (BufferGetLSNAtomic(buf) == so->currPos.lsn)
so->currPos.buf = buf;
Same here thanks to _bt_getbuf.
So those bits could be considered for integration.
Also, if I read the gist code correctly, there is one other case in
gistFindCorrectParent, and in btree there is one occurrence in
_bt_split. In XLogRecordAssemble, there could be more checks regarding
the locks taken on the buffers being registered.
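As a rough idea of the kind of check I have in mind there, purely
hypothetical (the helper name and placement are mine, and it assumes
buf_internals.h is available where the Buffer is still at hand, say in
XLogRegisterBuffer):

/* Hypothetical helper, not part of the patch: assert that a buffer
 * being registered for WAL is held with an exclusive content lock,
 * which also makes the later PageGetLSN() on its page image safe. */
static void
AssertBufferLockedExclusively(Buffer buffer)
{
    BufferDesc *bufHdr;

    Assert(BufferIsValid(buffer));
    bufHdr = GetBufferDescriptor(buffer - 1);
    Assert(LWLockHeldByMeInMode(BufferDescriptorGetContentLock(bufHdr),
                                LW_EXCLUSIVE));
}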
--
Michael