From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Cc: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Mahendra Singh <mahi6run(at)gmail(dot)com>, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, David Steele <david(at)pgmasters(dot)net>, Claudio Freire <klaussfreire(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] Block level parallel vacuum
Date: 2019-10-17 10:30:36
Message-ID: CAA4eK1K+gh3MBoFbSnBqhCY3gZ4Ye0C8kH92O7vDkBo4YfCOeA@mail.gmail.com
Lists: pgsql-hackers
On Thu, Oct 17, 2019 at 3:25 PM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
>
> On Thu, Oct 17, 2019 at 2:12 PM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
> >
> > On Thu, Oct 17, 2019 at 5:30 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > >
> > > Another point in this regard is that the user anyway has the option
> > > to turn off cost-based vacuum; by default, it is disabled anyway.
> > > So, if the user enables it, we have to provide some sensible behavior.
> > > If we can't come up with anything, then, in the end, we might want to
> > > turn it off for a parallel vacuum and mention that in the docs, but I
> > > think we should try to come up with a solution for it.
> >
> > I finally got your point and now understand the need. The idea I
> > proposed doesn't work well here.
> >
> > So you meant that all workers share the cost count, and if a parallel
> > vacuum worker increases the cost and it reaches the limit, only that
> > one worker sleeps? Is that okay even though the other parallel workers
> > are still running, in which case the sleep might not help?
> >
Remember that the other running workers will also increase
VacuumCostBalance, and whichever worker finds that it has become greater
than VacuumCostLimit will reset the balance and sleep. So, won't this
make sure that the overall throttling works the same?
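
To make that concrete, here's a rough sketch of the kind of thing I
have in mind (untested; the function name and the shared-memory
plumbing are made up, only the cost globals are real):

#include "postgres.h"
#include "miscadmin.h"		/* VacuumCostBalance, VacuumCostLimit, VacuumCostDelay */
#include "port/atomics.h"

/*
 * Hypothetical delay point for a parallel vacuum worker.  Every worker
 * charges its locally accumulated cost to a single counter in shared
 * memory; whichever worker pushes that counter past VacuumCostLimit
 * resets it and sleeps.  The other workers keep running, but their next
 * charges start from the reset value, so the total I/O rate is still
 * bounded by VacuumCostLimit per VacuumCostDelay interval.
 */
static void
parallel_vacuum_delay(pg_atomic_uint32 *shared_balance)
{
	uint32		balance;

	/* move this worker's local cost into the shared counter */
	balance = pg_atomic_add_fetch_u32(shared_balance,
									  (uint32) VacuumCostBalance);
	VacuumCostBalance = 0;

	if (balance >= (uint32) VacuumCostLimit)
	{
		/*
		 * Subtract only what we saw, so costs added by other workers
		 * while we were deciding are not lost, then nap.
		 */
		pg_atomic_sub_fetch_u32(shared_balance, balance);
		pg_usleep((long) (VacuumCostDelay * 1000));
	}
}

The point is that it doesn't matter which worker takes the nap; the
shared counter is what enforces the rate.
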
> I agree with this point. There is a possibility that some of the
> workers that are doing heavy I/O continue to work, while OTOH other
> workers that are doing very little I/O might become the victims and
> have their operations delayed unnecessarily.
>
Sure, but will it impact the overall I/O? I mean to say, the rate
limit we want to provide for the overall vacuum operation will still be
the same. Also, doesn't a similar thing happen now as well, where the
heap might have done a major portion of the I/O, but soon after we start
vacuuming the index, we hit the limit and sleep?
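
For reference, the current serial behavior is roughly this (paraphrased
from vacuum_delay_point() in src/backend/commands/vacuum.c; I have
elided the interrupt rechecks and accounting details):

void
vacuum_delay_point(void)
{
	/* always give interrupts a chance first */
	CHECK_FOR_INTERRUPTS();

	if (VacuumCostActive && !InterruptPending &&
		VacuumCostBalance >= VacuumCostLimit)
	{
		double		msec;

		/* sleep in proportion to the overshoot, capped at 4 * delay */
		msec = VacuumCostDelay * VacuumCostBalance / VacuumCostLimit;
		if (msec > VacuumCostDelay * 4)
			msec = VacuumCostDelay * 4;

		pg_usleep((long) (msec * 1000));

		/* reset, regardless of which phase (heap or index) ran up the cost */
		VacuumCostBalance = 0;
	}
}

So the balance already carries across the heap and index phases, and
whichever phase happens to cross the limit pays the sleep.
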
I think this might not be the perfect solution, and we should try to
come up with something else if it doesn't seem to be working. Have
you guys thought about the second solution I mentioned in email [1]
("Before launching workers, we need to compute the remaining I/O ....")?
Any other better ideas?
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com