From: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Per table autovacuum vacuum cost limit behaviour strange
Date: 2014-08-29 04:35:35
Message-ID: 54000317.6030604@catalyst.net.nz
Lists: pgsql-hackers
On 29/08/14 08:56, Alvaro Herrera wrote:
> Robert Haas wrote:
>
>> I agree that you might not like that. But you might not like having
>> the table vacuumed slower than the configured rate, either. My
>> impression is that the time between vacuums isn't really all that
>> negotiable for some people. I had one customer who had horrible bloat
>> issues on a table that was vacuumed every minute; when we changed the
>> configuration so that it was vacuumed every 15 seconds, those problems
>> went away.
>
> Wow, that's extreme. For that case you can set
> autovacuum_vacuum_cost_limit to 0, which disables the whole thing and
> lets vacuum run at full speed -- no throttling at all. Would that
> satisfy the concern?
>
Well no - you might have a whole lot of big tables that you don't want
vacuum to get too aggressive on, but a few small tables that are highly
volatile. You want *them* vacuumed really fast to prevent them becoming
huge tables with only a few live rows in them, but your system might
not be able to handle *all* your tables being vacuumed at full speed.
This is a fairly common scenario for (several) web CMS systems, which
tend to have session and/or cache tables that are small and extremely
volatile, plus the rest of the (real) data, which is bigger and vastly
less volatile.
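For illustration, the split can be expressed with per-table storage
parameters - a rough sketch only (table name hypothetical, and the
interaction of per-table cost settings with the autovacuum worker
balancing is exactly what this thread is questioning):

```sql
-- Hypothetical volatile session table: vacuum it early and unthrottled.
-- autovacuum_vacuum_cost_delay = 0 disables cost-based throttling for
-- this table only; a zero scale factor plus a fixed threshold makes
-- autovacuum fire after ~500 dead tuples regardless of table size.
ALTER TABLE web_session SET (
    autovacuum_vacuum_cost_delay = 0,
    autovacuum_vacuum_scale_factor = 0.0,
    autovacuum_vacuum_threshold = 500
);
-- The big, slowly-changing tables are simply left on the throttled
-- system-wide defaults, so they don't swamp the I/O subsystem.
```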
While there is a valid objection along the lines of "don't use a
database instead of memcache", it does seem reasonable that Postgres
should be able to cope with this type of workload.
Cheers
Mark