From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>
Cc: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: reloption to prevent VACUUM from truncating empty pages at the end of relation
Date: 2018-04-18 18:13:31
Message-ID: 6909.1524075211@sss.pgh.pa.us
Lists: pgsql-hackers
Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com> writes:
> What if we remember the buffers as seen by count_nondeletable_pages() and
> then just discard those specific buffers instead of scanning the entire
> shared_buffers again?
That's an idea.
> Surely we revisit all to-be-truncated blocks before
> actual truncation. So we already know which buffers to discard. And we're
> holding exclusive lock at that point, so nothing can change underneath. Of
> course, we can't really remember a large number of buffers, so we can do
> this in small chunks.
Hm? We're deleting the last N consecutive blocks, so it seems like we
just need to think in terms of clearing that range. I think this can
just be a local logic change inside DropRelFileNodeBuffers().
You could optimize it fairly easily with some heuristic that compares
N to the size of shared buffers; if N is too large a fraction, the
existing full-scan implementation will be cheaper than a bunch of
hashtable probes.
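To make that concrete, here is a rough sketch (untested; the helper
name and the NBuffers/32 cutoff are made up for illustration) of what
the per-block probing could look like inside bufmgr.c, reusing the
existing buffer-mapping machinery:

/*
 * Sketch: invalidate buffers for blocks [firstDelBlock, nblocks) by
 * probing the buffer mapping hashtable once per block, instead of
 * scanning every shared buffer.  Caller holds AccessExclusiveLock on
 * the relation, so no new buffers for these blocks can appear
 * meanwhile; "nblocks" is the relation's pre-truncation length, which
 * the caller (e.g. smgrtruncate) would have to pass down.
 */
static void
DropRelFileNodeBuffersByRange(RelFileNode rnode, ForkNumber forkNum,
                              BlockNumber nblocks,
                              BlockNumber firstDelBlock)
{
    BlockNumber curBlock;

    for (curBlock = firstDelBlock; curBlock < nblocks; curBlock++)
    {
        BufferTag   bufTag;
        uint32      bufHash;
        LWLock     *partitionLock;
        int         buf_id;
        BufferDesc *bufHdr;
        uint32      buf_state;

        INIT_BUFFERTAG(bufTag, rnode, forkNum, curBlock);
        bufHash = BufTableHashCode(&bufTag);
        partitionLock = BufMappingPartitionLock(bufHash);

        /* Is this block in the buffer pool at all? */
        LWLockAcquire(partitionLock, LW_SHARED);
        buf_id = BufTableLookup(&bufTag, bufHash);
        LWLockRelease(partitionLock);

        if (buf_id < 0)
            continue;           /* not cached, nothing to do */

        bufHdr = GetBufferDescriptor(buf_id);

        /*
         * Recheck the tag under the buffer header spinlock, since the
         * buffer could have been evicted and reused for another page
         * after we released the mapping partition lock.
         */
        buf_state = LockBufHdr(bufHdr);
        if (RelFileNodeEquals(bufHdr->tag.rnode, rnode) &&
            bufHdr->tag.forkNum == forkNum &&
            bufHdr->tag.blockNum >= firstDelBlock)
            InvalidateBuffer(bufHdr);   /* releases spinlock */
        else
            UnlockBufHdr(bufHdr, buf_state);
    }
}

DropRelFileNodeBuffers() could then choose a strategy per call, along
the lines of

    if (nblocks - firstDelBlock < NBuffers / 32)    /* placeholder */
        DropRelFileNodeBuffersByRange(rnode, forkNum, nblocks,
                                      firstDelBlock);
    else
        ... existing full scan of shared buffers ...

where the actual cutoff would need benchmarking.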
regards, tom lane