Re: Berserk Autovacuum (let's save next Mandrill)

From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Chris Travers <chris(dot)travers(at)adjust(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Darafei Komяpa Praliaskouski <me(at)komzpa(dot)net>, Michael Banck <mbanck(at)gmx(dot)net>, David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>, PostgreSQL Developers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Berserk Autovacuum (let's save next Mandrill)
Date: 2019-04-15 01:15:28
Message-ID: CAD21AoCzyUP+RZm39yH2PSkMp6zZHCipsWr0sd_s2TBV4VAfeA@mail.gmail.com
Lists: pgsql-hackers

On Sun, Apr 14, 2019 at 4:51 AM Tomas Vondra
<tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>
> On Thu, Apr 11, 2019 at 11:25:29AM +0200, Chris Travers wrote:
> > On Wed, Apr 10, 2019 at 5:21 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
> >
> > Hi,
> >
> > On April 10, 2019 8:13:06 AM PDT, Alvaro Herrera
> > <alvherre(at)2ndquadrant(dot)com> wrote:
> > >On 2019-Mar-31, Darafei "Komяpa" Praliaskouski wrote:
> > >
> > >> Alternative point of "if your database is super large and actively
> > >written,
> > >> you may want to set autovacuum_freeze_max_age to even smaller values
> > >so
> > >> that autovacuum load is more evenly spread over time" may be needed.
> > >
> > >I don't think it's helpful to force emergency vacuuming more
> > >frequently;
> > >quite the contrary, it's likely to cause even more issues. We should
> > >tweak autovacuum to perform freezing more preemptively instead.
> >
> > I still think the fundamental issue with making vacuum less painful is
> > that all indexes have to be read entirely, even if there's not much
> > work (say millions of rows frozen, hundreds removed). Without that issue
> > we could vacuum much more frequently. And do it properly in insert-only
> > workloads.
> >
> > So I see a couple of issues here and wondering what the best approach is.
> > The first is to just skip lazy_cleanup_index if no rows were removed. Is
> > this the approach you have in mind? Or is that insufficient?
>
> I don't think that's what Andres had in mind, as he explicitly mentioned
> removed rows. So just skipping lazy_cleanup_index when there were no
> deleted rows would not help in that case.
>
> What I think we could do is simply leave the tuple pointers in the table
> (and indexes) when there are only very few of them, and only do the
> expensive table/index cleanup once there's enough of them.

Yeah, we now have infrastructure that skips index vacuuming by leaving
the tuple pointers in place. So we could then have a threshold for
autovacuum to invoke index vacuuming. Another idea is to delete index
entries more actively by looking them up individually instead of
scanning the whole index, as proposed in [1].

[1] I couldn't get the URL of the thread right now for some reason, but
the thread subject is "[WIP] [B-Tree] Retail IndexTuple deletion".

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
