From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>
Cc: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Per table autovacuum vacuum cost limit behaviour strange
Date: 2014-02-14 21:32:27
Message-ID: 20140214213227.GF6342@eldon.alvh.no-ip.org
Lists: pgsql-hackers

Haribabu Kommi wrote:
> I changed the balance cost calculations a little bit to give priority
> to the user-provided per-table autovacuum parameters. If any
> user-specified per-table vacuum parameters exist and they differ from
> the GUC vacuum parameters, then the balance cost calculation does not
> include that worker; the cost is distributed only among the workers
> running with the GUC vacuum cost parameters.
>
> The problem with this calculation is that if the user sets the
> per-table values equal to the GUC values, they are not recognized as
> per-table settings and the worker is balanced as if it had none.
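
[Editor's note: restated in code, the rule quoted above boils down to an
exclusion test along these lines. This is a hypothetical sketch with
made-up struct and field names, not the patch's actual code; note how a
per-table setting that happens to equal the GUC value fails the test.]

#include <stdbool.h>

/* Hypothetical per-worker state; not the patch's actual structs. */
typedef struct Worker
{
    bool   has_per_table_settings; /* table has autovacuum options? */
    double cost_limit;             /* effective vacuum_cost_limit */
    double cost_delay;             /* effective vacuum_cost_delay, ms */
} Worker;

/*
 * The patch's rule, as described: a worker is left out of the
 * balancing pool only when per-table settings exist AND differ from
 * the GUC values.  Per-table settings that happen to equal the GUCs
 * therefore get balanced as if they were defaults.
 */
static bool
excluded_from_balance(const Worker *w,
                      double guc_limit, double guc_delay)
{
    return w->has_per_table_settings &&
           (w->cost_limit != guc_limit ||
            w->cost_delay != guc_delay);
}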
I think this is a strange approach to the problem, because if you
configure the backends just so, they are completely ignored instead of
being adjusted. And it has action-at-a-distance consequences: if you
change the defaults in postgresql.conf, you end up with completely
different behavior on the tables whose delay you have carefully tuned
so that they are ignored in the rebalance calculations.
I think that rather than ignoring some backends completely, we should be
looking at how to "weight" the balancing calculations among all the
backends in some smart way, so that they don't all end up with the
default limit value, which AFAIU is what happens now -- which is
stupid. I'm not really sure how to do that; perhaps base it on the
globally configured delay/limit ratio vs. the table-specific ratio.
What I mean is that perhaps the current approach is all wrong and we
need to find a better algorithm to suit this case and more generally.
Of course, I don't mean to say that it should behave completely
differently than now in normal cases, only that it shouldn't give
completely stupid results in corner cases such as this one.
As an example, suppose that global limit=200 and global delay=20 (the
defaults). Then we have a global ratio of 200/20 = 10. If all three
tables currently being vacuumed are using the default values, then they
all have ratio=10 and therefore all should have the same limit and delay
settings applied after rebalance. Now, if two tables have ratio=10 and
one table has been configured for a very fast vacuum, say limit=10000,
then the ratio for that table is 10000/20 = 500, i.e. 50 times the
global ratio. Therefore that table should be configured, after
rebalance, with a limit and delay that make it vacuum 50 times faster
than the other two tables. (And there is the further constraint that
the limit per unit of delay, summed over all workers, should match the
globally configured rate, so that the total I/O rate stays correct.)
I haven't thought about how to code that, but I don't think it should be
too difficult. Want to give it a try? I think it makes sense to modify
both the running delay and the running limit to achieve whatever ratio
we come up with, except that the delay should probably not go below
10ms, because apparently some platforms have a sleep granularity that
coarse, and a smaller delay wouldn't really work.
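
[Editor's note: to make the arithmetic above concrete, here is a
minimal standalone sketch of such a weighted rebalance, reusing the
hypothetical Worker struct from the earlier sketch. It is an
illustration under the assumptions above, not a proposed patch.]

#include <stdio.h>

/*
 * Give each worker a share of the global rate (cost-limit units per
 * ms of delay) proportional to its own configured limit/delay ratio.
 * No worker is excluded; per-table settings merely change its weight.
 * The per-worker rates sum to the global rate, satisfying the
 * constraint above.
 */
static void
rebalance(Worker *w, int n, double global_limit, double global_delay)
{
    double global_rate = global_limit / global_delay;
    double total_ratio = 0.0;

    for (int i = 0; i < n; i++)
        total_ratio += w[i].cost_limit / w[i].cost_delay;

    for (int i = 0; i < n; i++)
    {
        double ratio = w[i].cost_limit / w[i].cost_delay;
        double rate = global_rate * ratio / total_ratio;
        double delay = w[i].cost_delay;

        /* assumed 10ms sleep-granularity floor, as discussed above;
         * raising the delay raises the limit too, keeping the rate */
        if (delay < 10.0)
            delay = 10.0;

        w[i].cost_delay = delay;
        w[i].cost_limit = rate * delay;
    }
}

int
main(void)
{
    /* the example: two default tables, one configured with limit=10000 */
    Worker w[] = {
        { false, 200, 20 },
        { false, 200, 20 },
        { true, 10000, 20 }
    };

    rebalance(w, 3, 200, 20);
    for (int i = 0; i < 3; i++)
        printf("worker %d: limit=%.2f delay=%.0fms\n",
               i, w[i].cost_limit, w[i].cost_delay);
    return 0;
}

[With these numbers the two default workers come out with limit = 3.85
and the fast one with limit = 192.31, all at delay=20ms: the fast table
does 50 times the work per sleep, and the three rates still sum to the
global 200/20. A real implementation would presumably also round the
limits to integers and clamp them to at least 1.]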
Am I making sense?
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services