From: Masahiko Sawada <masahiko(dot)sawada(at)2ndquadrant(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Sergei Kornilov <sk(at)zsrv(dot)org>, Mahendra Singh <mahi6run(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Amit Langote <langote_amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, David Steele <david(at)pgmasters(dot)net>, Claudio Freire <klaussfreire(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] Block level parallel vacuum
Date: 2019-12-18 06:15:56
Message-ID: CA+fd4k6gnUnmuLDW9zz93d3UWxnjOJ0ru4Gfs_YX1kfLap54=w@mail.gmail.com
Lists: pgsql-hackers
On Wed, 18 Dec 2019 at 15:03, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Tue, Dec 17, 2019 at 6:07 PM Masahiko Sawada
> <masahiko(dot)sawada(at)2ndquadrant(dot)com> wrote:
> >
> > On Fri, 13 Dec 2019 at 15:50, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > >
> > > > > I think it shouldn't be more than the number with which we have
> > > > > created a parallel context, no? If that is the case, then I think it
> > > > > should be fine.
> > > >
> > > > Right. I thought that ReinitializeParallelDSM() with an additional
> > > > argument would shrink the DSM, but I understand that it doesn't
> > > > actually shrink the DSM; it just keeps a variable for the number of
> > > > workers to launch, is that right?
> > > >
> > >
> > > Yeah, probably we need to change the nworkers stored in the context,
> > > and it should be less than the value already stored there.
> > >
> > > > And we would also need to call ReinitializeParallelDSM() at the
> > > > beginning of index vacuum or index cleanup, since at the end of
> > > > index vacuum we don't know whether we will do index vacuum or
> > > > index cleanup next.
> > > >
> > >
> > > Right.
> >
> > I've attached the latest version patch set. These patches require the
> > gist vacuum patch[1]. The patches incorporate the review comments.
> >
>
> I was analyzing your changes related to ReinitializeParallelDSM() and
> it seems like we might launch more workers than necessary for the
> bulkdelete phase. While creating the parallel context, we used the
> maximum of "workers required for the bulkdelete phase" and "workers
> required for cleanup", but now if the number of workers required in
> the bulkdelete phase is less than in the cleanup phase (as mentioned
> by you in one example), then we would launch more workers than needed
> for the bulkdelete phase.
Good catch. Currently, when creating a parallel context, the number of
workers passed to CreateParallelContext() is set not only to
pcxt->nworkers but also to pcxt->nworkers_to_launch. We would need to
specify the number of workers to actually launch either after creating
the parallel context or while creating it. Or, I think, we could call
ReinitializeParallelDSM() even the first time we run index vacuum.
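
To put the idea in code, here is a rough sketch, not the actual patch:
the function name, the entry-point name, and the per-phase worker
counts are placeholders, and the direct assignment to
pcxt->nworkers_to_launch stands in for whatever mechanism we settle on
(a new ReinitializeParallelDSM() argument or similar).

#include "postgres.h"
#include "access/parallel.h"
#include "access/xact.h"

/*
 * Sketch only: create the parallel context once with the maximum
 * number of workers any phase can use, then cap the number of
 * workers actually launched before each bulkdelete/cleanup pass.
 */
void
parallel_vacuum_sketch(int nworkers_bulkdel, int nworkers_cleanup,
                       int nworkers_this_phase)
{
    ParallelContext *pcxt;

    EnterParallelMode();
    pcxt = CreateParallelContext("postgres", "parallel_vacuum_main",
                                 Max(nworkers_bulkdel, nworkers_cleanup));
    InitializeParallelDSM(pcxt);

    /* Before every bulkdelete/cleanup pass, including the first one */
    ReinitializeParallelDSM(pcxt);

    /*
     * Launch only as many workers as this phase needs.  This assumes
     * we can lower pcxt->nworkers_to_launch after the context is
     * created (or pass the count through a new argument).
     */
    pcxt->nworkers_to_launch = Min(pcxt->nworkers, nworkers_this_phase);
    LaunchParallelWorkers(pcxt);

    /* ... parallel index vacuum or cleanup happens here ... */

    WaitForParallelWorkersToFinish(pcxt);
    DestroyParallelContext(pcxt);
    ExitParallelMode();
}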
Regards,
--
Masahiko Sawada http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services