From: | Greg Stark <gsstark(at)mit(dot)edu> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: really lazy vacuums? |
Date: | 2011-03-14 23:40:44 |
Message-ID: | AANLkTikEUUOhR5gX4b2gXU3YkQB-VBkWz7YPg-cExXdX@mail.gmail.com |
Lists: | pgsql-hackers |
On Mon, Mar 14, 2011 at 8:33 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> I'm not sure about that either, although I'm not sure of the reverse
> either. But before I invest any time in it, do you have any other
> good ideas for addressing the "it stinks to scan the entire index
> every time we vacuum" problem? Or for generally making vacuum
> cheaper?
You could imagine an index AM that, instead of scanning the index, just
accumulated all the dead tuples in a hash table and checked it before
following any index link. Whenever the hash table got too big, it could
do a sequential scan, prune any pointers to those tuples, and start a
new hash table.
That would work well if there are frequent vacuums finding a few
tuples per vacuum. It might even allow us to absorb dead tuples from
"retail" vacuums so we could get rid of line pointers earlier. But it
would involve more WAL-logged operations and incur an extra overhead
on each index lookup.
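As a rough illustration of the idea (a hypothetical sketch, not PostgreSQL code — the names and the in-memory dict/set representation are all invented here): lookups filter their results through the dead-tuple table, and only when that table crosses a threshold does a single sequential sweep of the index remove the stale pointers.

```python
PRUNE_THRESHOLD = 4  # assumed: sweep once this many dead TIDs accumulate

class LazyPruneIndex:
    """Hypothetical index that defers pruning of dead tuple pointers."""

    def __init__(self):
        self.entries = {}       # key -> list of TIDs (heap tuple pointers)
        self.dead_tids = set()  # accumulated dead tuples, not yet pruned

    def insert(self, key, tid):
        self.entries.setdefault(key, []).append(tid)

    def lookup(self, key):
        # Check the dead-tuple hash table before following any index link.
        return [t for t in self.entries.get(key, []) if t not in self.dead_tids]

    def mark_dead(self, tids):
        # Vacuum hands us dead TIDs; remember them instead of scanning now.
        self.dead_tids.update(tids)
        if len(self.dead_tids) > PRUNE_THRESHOLD:
            self._sweep()

    def _sweep(self):
        # One sequential pass over the whole index removes stale pointers,
        # then a fresh (empty) dead-tuple table is started.
        for key in list(self.entries):
            live = [t for t in self.entries[key] if t not in self.dead_tids]
            if live:
                self.entries[key] = live
            else:
                del self.entries[key]
        self.dead_tids.clear()
```

The extra cost the email mentions shows up in `lookup`: every probe pays for the dead-set membership check, and in a real AM both `mark_dead` and `_sweep` would have to be WAL-logged.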
--
greg