| From: | Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> |
|---|---|
| To: | Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz> |
| Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: Per table autovacuum vacuum cost limit behaviour strange |
| Date: | 2014-08-26 22:27:00 |
| Message-ID: | 20140826222700.GA23139@eldon.alvh.no-ip.org |
| Lists: | pgsql-hackers |
Alvaro Herrera wrote:
> So my proposal is a bit more complicated. First we introduce the notion
> of a single number, to enable sorting and computations: the "delay
> equivalent", which is the cost_limit divided by cost_delay.
Here's a patch that implements this idea. As you can see, it is quite a
bit more complicated than Haribabu's proposal.
There are two holes in this:
1. If you ALTER DATABASE to change the vacuum delay settings for a database,
those values are not considered in the global delay equivalent. I don't think
this is very important, and anyway we haven't considered this case very much,
so it's okay if we don't handle it.
2. If you have a "fast worker" that's only slightly faster than regular
workers, it will become slower in some cases. This is explained in a
FIXME comment in the patch.
I don't really have any more time to invest in this, but I would like to
see it in 9.4. Mark, would you test this? Haribabu, how open are you
to fixing point (2) above?
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
| Attachment | Content-Type | Size |
|---|---|---|
| per_table_vacuum_para_v3.patch | text/x-diff | 17.1 KB |