From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Per table autovacuum vacuum cost limit behaviour strange
Date: 2014-08-28 22:01:41
Message-ID: CA+TgmoaON=w55j7HzZZi1ZmmGpc+w1-+2MrDwZs_nXNvcBrObQ@mail.gmail.com
Lists: pgsql-hackers
On Thu, Aug 28, 2014 at 4:56 PM, Alvaro Herrera
<alvherre(at)2ndquadrant(dot)com> wrote:
> Robert Haas wrote:
>> I agree that you might not like that. But you might not like having
>> the table vacuumed slower than the configured rate, either. My
>> impression is that the time between vacuums isn't really all that
>> negotiable for some people. I had one customer who had horrible bloat
>> issues on a table that was vacuumed every minute; when we changed the
>> configuration so that it was vacuumed every 15 seconds, those problems
>> went away.
>
> Wow, that's extreme. For that case you can set
> autovacuum_vacuum_cost_limit to 0, which disables the whole thing and
> lets vacuum run at full speed -- no throttling at all. Would that
> satisfy the concern?
Well, maybe, if you want to run completely unthrottled. But I have no
evidence that's a common desire.
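
For what it's worth, one way to run a single table unthrottled with stock settings is the per-table cost-delay storage parameter; a minimal sketch, with a made-up table name standing in for the bloating table:

    -- Turn off cost-based throttling for autovacuum on this table only;
    -- a delay of 0 means the worker never sleeps between chunks of work.
    ALTER TABLE hot_table SET (autovacuum_vacuum_cost_delay = 0);

    -- Go back to the global behaviour later if desired.
    ALTER TABLE hot_table RESET (autovacuum_vacuum_cost_delay);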
>> Now that is obviously more a matter for autovacuum_naptime
>> than this option, but the point seems general to me: if you need the
>> table vacuumed every N seconds, minutes, or hours, and it only gets
>> vacuumed every 2N or 3N or 5N seconds, minutes, or hours because there
>> are other autovacuum workers running, the table is going to bloat.
>
> There might be another problem here which is that if you have all your
> workers busy because they are vacuuming large tables that don't have
> churn high enough to warrant disrupting the whole environment (thus low
> cost_limit), then the table will bloat no matter what you set its cost
> limit to. So it's not only a matter of a low enough naptime (which
> is also a bad thing for the rest of the system) but also one of
> something similar to priority inversion; should you speed up the
> vacuuming of those large tables so that one worker is freed soon enough
> to get to the high-churn table?
I don't think so. I continue to believe that we need to provide
the user with the tools to be certain that table X will get vacuumed
at least every Y seconds/minutes/hours. To me, allowing the user to
set a rate that the system will not adjust or manipulate in any way
makes this a lot easier than anything else we might do.
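
To make that concrete: what I want is for a per-table setting along these lines (the table name and numbers are invented for the example) to mean exactly what it says, instead of being scaled down whenever other workers happen to be running:

    -- Request roughly 2000 cost units of work per 20ms sleep for this
    -- table's autovacuums, taken at face value rather than rebalanced
    -- across whatever other workers are active.
    ALTER TABLE high_churn_tab SET (
        autovacuum_vacuum_cost_limit = 2000,
        autovacuum_vacuum_cost_delay = 20
    );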
> Was the solution for that customer to set an external tool running
> vacuum on that table?
Nope, we just changed settings.
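
Roughly the shape of the change (the table name and numbers here are illustrative, not the customer's actual settings):

    # postgresql.conf: wake the autovacuum launcher more often (default 1min)
    autovacuum_naptime = 15s

    -- and per table, so the hot table qualifies on nearly every cycle:
    ALTER TABLE hot_table SET (
        autovacuum_vacuum_scale_factor = 0.0,  -- don't scale with table size
        autovacuum_vacuum_threshold = 1000     -- fixed dead-tuple trigger
    );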
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company