From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Locking end of indexes during VACUUM
Date: 2011-08-03 20:44:37
Message-ID: CA+U5nMKbq1iLUmoCw9mEkT0GOALQ8+H-odZpVwpv0Uaj_++G2A@mail.gmail.com
Lists: pgsql-hackers
During btvacuumscan(), we lock the index for extension and then wait
to acquire a cleanup lock on the last page. We then loop until we find
a point where the index has not expanded again during our wait for the
lock on that last page. On a busy index this can take some time,
especially when people regularly access data with the highest values
in the index.
The comments there say "It is critical that we visit all leaf pages,
including ones added after we start the scan, else we might fail to
delete some deletable tuples."
What seems strange is that we make no attempt to check whether we have
already identified all tuples being removed by the VACUUM. We have the
number of dead tuples we are looking for and we track the number of
tuples we have deleted from the index, so we could easily make this
check early and avoid waiting.
Can we avoid scanning all pages once we have proven we have all dead tuples?
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services