From: Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>
To: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: reloption to prevent VACUUM from truncating empty pages at the end of relation
Date: 2018-04-18 17:42:40
Message-ID: CABOikdOKT0oAOa8_ynm+eOJc9BGL82hod_L1-Giw4BCu2PP+FA@mail.gmail.com
Lists: pgsql-hackers
On Wed, Apr 18, 2018 at 10:50 PM, Fujii Masao <masao(dot)fujii(at)gmail(dot)com> wrote:
>
> I'm not sure if it's safe to forcibly cancel VACUUM's truncation while
> scanning shared_buffers. That scan happens after WAL-logging and before
> the actual truncation.
>
Ah ok. I misread your proposal. This is about the shared_buffers scan
in DropRelFileNodeBuffers() and we can't cancel that operation.
What if we remember the buffers seen by count_nondeletable_pages() and
then discard just those specific buffers, instead of scanning the entire
shared_buffers again? We already revisit all to-be-truncated blocks before
the actual truncation, so we know which buffers to discard. And we're
holding an exclusive lock on the relation at that point, so nothing can
change underneath us. Of course, we can't remember an arbitrarily large
number of buffers, so we can do this in small chunks: scan the last K
blocks, remember those K buffers, discard those K buffers, truncate the
relation, and then move on to the next K blocks. If another backend
requests a lock on the table, we give up or retry after a while.
Thanks,
Pavan
--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services