From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vacuum rate limit in KBps
Date: 2012-01-29 23:29:42
Message-ID: CAMkU=1yM2WxUx8TZe=2ZG-wfFwMHCh-OmeV3ZQOEBuymH4i06Q@mail.gmail.com
Lists: pgsql-hackers
On Sun, Jan 15, 2012 at 12:24 AM, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:
> If you then turn that equation around, making the maximum write rate the
> input, for any given cost delay and dirty page cost you can solve for the
> cost limit--the parameter in fictitious units everyone hates. It works like
> this, with the computation internals logged every time they run for now:
>
> #vacuum_cost_rate_limit = 4000 # maximum write rate in kilobytes/second
> LOG: cost limit=200 based on rate limit=4000 KB/s delay=20 dirty cost=20
The computation seems to be suffering from some kind of overflow:
cost limit=50 based on rate limit=1000 KB/s delay=20 dirty cost=20
cost limit=100 based on rate limit=2000 KB/s delay=20 dirty cost=20
cost limit=150 based on rate limit=3000 KB/s delay=20 dirty cost=20
cost limit=200 based on rate limit=4000 KB/s delay=20 dirty cost=20
cost limit=250 based on rate limit=5000 KB/s delay=20 dirty cost=20
cost limit=1 based on rate limit=6000 KB/s delay=20 dirty cost=20
cost limit=1 based on rate limit=7000 KB/s delay=20 dirty cost=20
cost limit=1 based on rate limit=8000 KB/s delay=20 dirty cost=20
Cheers,
Jeff