Re: decoupling table and index vacuum

From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: decoupling table and index vacuum
Date: 2022-02-10 19:21:30
Message-ID: CAH2-WzmZExWhWt8B6A8N5xZcABOfusaowD__K2cB0nxx5-URfA@mail.gmail.com
Lists: pgsql-hackers

On Thu, Feb 10, 2022 at 11:16 AM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Thu, Feb 10, 2022 at 3:10 AM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> > Actually I was not worried about the scan getting slow. What I was
> > worried about is that if we keep ignoring the dead tuples for a long
> > time, then in the worst case we could have a huge number of dead
> > tuples in the index, maybe 80% to 90%, and then suddenly get a lot of
> > insertions for keys that cannot use bottom-up deletion (due to the
> > key range). Now we have a lot of pages that contain only dead tuples,
> > but we will still allocate new pages because we ignored the dead-tuple
> > percentage and did not vacuum for a long time.
>
> It seems like a reasonable concern to me ... and I think it's somewhat
> related to my comments about trying to distinguish which dead tuples
> matter vs. which don't.

It's definitely a reasonable concern. But once you find yourself in
this situation, *every* index will need to be vacuumed anyway, pretty
much as soon as possible. There will be many LP_DEAD items in the
heap, which will be enough to force index vacuuming of all indexes.
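The decision rule described above can be sketched as follows. This is my own
illustration, not code from any patch on this thread; the function name,
thresholds, and bloat estimates are all hypothetical:

```python
# Hypothetical sketch of the scheduling argument: once the fraction of
# LP_DEAD line pointers in the heap is high, index vacuuming cannot be
# skipped for *any* index, because every index must be scanned before the
# heap's LP_DEAD line pointers can be reclaimed. Per-index skipping only
# applies while the heap itself has few LP_DEAD items.

def indexes_to_vacuum(heap_lp_dead_fraction, index_bloat_estimates,
                      heap_threshold=0.02, index_threshold=0.20):
    """Return names of indexes that should be vacuumed now.

    heap_lp_dead_fraction: fraction of heap line pointers that are LP_DEAD.
    index_bloat_estimates: dict of index name -> estimated fraction of
        dead tuples in that index (a made-up statistic for illustration).
    """
    if heap_lp_dead_fraction >= heap_threshold:
        # Many LP_DEAD items in the heap force vacuuming of all indexes,
        # regardless of each index's own bloat estimate.
        return sorted(index_bloat_estimates)
    # Otherwise, only individually bloated indexes need attention.
    return sorted(name for name, bloat in index_bloat_estimates.items()
                  if bloat >= index_threshold)
```

Under this sketch, the worst case Dilip describes (an index that is 80-90%
dead tuples) would still be picked up by the per-index path, and the
heap-driven path guarantees that no index is skipped indefinitely once the
heap accumulates enough LP_DEAD items.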

--
Peter Geoghegan
