From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Greg Smith <greg(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Cost limited statements RFC
Date: 2013-06-06 20:02:00
Message-ID: CA+TgmoZ=J=o6besGegPTCjam=Bb-QwVVnAKJ1qb4V_dCf9-tmg@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jun 6, 2013 at 3:34 PM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
> On Fri, May 24, 2013 at 11:51 AM, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:
>>
>> On 5/24/13 9:21 AM, Robert Haas wrote:
>>
>>> But I wonder if we wouldn't be better off coming up with a little more
>>> user-friendly API. Instead of exposing a cost delay, a cost limit,
>>> and various charges, perhaps we should just provide limits measured in
>>> KB/s, like dirty_rate_limit = <amount of data you can dirty per
>>> second, in kB> and read_rate_limit = <amount of data you can read into
>>> shared buffers per second, in kB>.
>>
>>
>> I already made and lost the argument for doing vacuum in KB/s units, so I
>> wasn't planning on putting that in the way of this one.
>
>
> I think the problem is that making that change would force people to relearn
> something that was already long established, and it was far from clear that
> the improvement, though real, was big enough to justify forcing people to do
> that. That objection would not apply to a new feature, as there would be
> nothing to re-learn. The other objection was that (at that time) we had
> some hope that the entire workings would be redone for 9.3, and it seemed
> unfriendly to re-name things in 9.2 without much change in functionality,
> and then redo them completely in 9.3.

Right. Also, IIRC, the limits didn't really mean what they purported
to mean. You set either a read or a dirty rate in KB/s, but what was
really limited was the combination of the two, and the relative
importance of the two factors was based on other settings in a
severely non-obvious way.

If we can see our way clear to ripping out the autovacuum costing
stuff and replacing it with a read rate limit and a dirty rate
limit, I'd be in favor of that. The current system limits the linear
combination of those with user-specified coefficients, which is more
powerful but less intuitive. If we need that, we'll have to keep it
the way it is, but I'm hoping we don't.
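
For concreteness, a rough sketch of that coupling (not PostgreSQL source;
the helper and its read_fraction argument are purely illustrative, and the
numbers are the documented defaults of that era: vacuum_cost_page_miss = 10,
vacuum_cost_page_dirty = 20, vacuum_cost_limit = 200,
autovacuum_vacuum_cost_delay = 20ms, 8 kB pages). Page reads and page
dirtying draw on a single cost budget, so the effective ceiling on either
one depends on how much of the other is happening:

    # Minimal sketch of the existing cost-based delay, using the era's
    # documented defaults.  max_rates_kb_per_s is a made-up helper, not
    # anything in the server.
    COST_PAGE_MISS = 10      # charge per page read into shared buffers
    COST_PAGE_DIRTY = 20     # charge per page dirtied
    COST_LIMIT = 200         # accumulated cost allowed before sleeping
    COST_DELAY_MS = 20       # sleep length once the budget is spent
    PAGE_KB = 8              # default block size

    def max_rates_kb_per_s(read_fraction):
        """Given the share of the cost budget spent on page misses (the
        rest on dirtying pages), return (read KB/s, dirty KB/s)."""
        budgets_per_s = 1000.0 / COST_DELAY_MS                  # 50/s
        read_pages = COST_LIMIT * read_fraction / COST_PAGE_MISS
        dirty_pages = COST_LIMIT * (1.0 - read_fraction) / COST_PAGE_DIRTY
        return (read_pages * budgets_per_s * PAGE_KB,
                dirty_pages * budgets_per_s * PAGE_KB)

    print(max_rates_kb_per_s(1.0))   # (8000.0, 0.0)    all reads
    print(max_rates_kb_per_s(0.0))   # (0.0, 4000.0)    all dirtying
    print(max_rates_kb_per_s(0.5))   # (4000.0, 2000.0) each ceiling halved

Neither figure is a fixed ceiling on its own: dirtying pages eats into the
read budget and vice versa, which is exactly the coupling that separate
read and dirty rate limits would make explicit.
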
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company