From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Cost limited statements RFC
Date: 2013-06-07 14:14:29
Message-ID: CA+TgmoYGQJdAU6CbMDE50PtBrzrb+uuAiVxBMG045XGCe_4o_A@mail.gmail.com
Lists: pgsql-hackers
On Thu, Jun 6, 2013 at 7:36 PM, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:
> I have also subjected some busy sites to a field test here since the
> original discussion, to try to nail down whether this is really necessary.
> So far I haven't gotten any objections, and I've seen one serious
> improvement after setting vacuum_cost_page_hit to 0. The much improved
> server is the one I'm showing here. When a page hit doesn't cost anything,
> the limiter on how fast vacuum can churn through a well-cached relation
> effectively becomes the CPU speed of a single core. Nowadays you can peg
> any single core like that and still not disrupt the whole server.
Check. I have no trouble believing that limit is hurting us more than
it's helping us.
> If the page hit limit goes away, the user with a single-core server who is
> used to having autovacuum pillage shared_buffers at only 78MB/s might
> complain if that rate became unbounded.
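(For context, the 78MB/s figure is just the default cost arithmetic: with
vacuum_cost_page_hit = 1, a cost limit of 200, and a 20ms delay, vacuum can
touch at most 200 cached pages per sleep interval, i.e.

    200 pages/interval * 50 intervals/s * 8kB/page ~= 78MiB/s

before it has to nap.)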
Except that it shouldn't become unbounded, because of the ring-buffer
stuff. Vacuum can pillage the OS cache, but the degree to which a
scan of a single relation can pillage shared_buffers should be sharply
limited.
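(The "ring-buffer stuff", for anyone who hasn't looked at it recently:
vacuum reads the relation through a small dedicated buffer access strategy,
so the scan keeps recycling the same handful of buffers instead of evicting
arbitrary pages from shared_buffers. Very roughly, and glossing over the
real vacuumlazy.c code, the shape is:)

/* Rough sketch only, not the actual vacuum code.  GetAccessStrategy(BAS_VACUUM)
 * returns a small fixed-size ring of buffers, and reads done through that
 * strategy recycle buffers from the ring rather than pushing other pages
 * out of shared_buffers. */
#include "postgres.h"
#include "storage/bufmgr.h"

static void
scan_relation_with_ring(Relation rel, BlockNumber nblocks)
{
    BufferAccessStrategy strategy = GetAccessStrategy(BAS_VACUUM);
    BlockNumber blkno;

    for (blkno = 0; blkno < nblocks; blkno++)
    {
        Buffer  buf = ReadBufferExtended(rel, MAIN_FORKNUM, blkno,
                                         RBM_NORMAL, strategy);

        /* ... prune and inspect the page here ... */
        ReleaseBuffer(buf);
    }

    FreeAccessStrategy(strategy);
}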
> Buying that it's OK to scrap the hit limit leads toward a simple-to-code
> implementation of read/write rate limits, like this:
>
> -vacuum_cost_page_* are removed as external GUCs. Maybe the internal
> accounting for them stays the same for now, just to limit the number of
> changes happening at once.
>
> -vacuum_cost_delay becomes an internal parameter fixed at 20ms. That's
> worked out OK in the field, there's not a lot of value in a higher setting,
> and lower settings are impractical due to the effective 10ms lower limit on
> sleep resolution that some systems have.
>
> -vacuum_cost_limit goes away as an external GUC, and instead the actual cost
> limit becomes an internal value computed from the other parameters. At the
> default values the value that pops out will still be close to 200. Not
> messing with that will keep all of the autovacuum worker cost splitting
> logic functional.
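(Presumably the arithmetic behind "close to 200" is something like: a default
read limit of roughly 8MB/s is about 1000 8kB pages per second, or 20 page
misses per 20ms interval; at the internal page-miss cost of 10, that yields a
budget of 20 * 10 = 200 per interval.)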
I think you're missing my point here, which is that we shouldn't
have any such thing as a "cost limit". We should limit reads and
writes *completely separately*. IMHO, there should be a limit on
reading, and a limit on dirtying data, and those two limits should not
be tied to any common underlying "cost limit". If they are, they will
not actually enforce the limits as set, but some other composite
limit, which will just be weird.
IOW, we'll need new logic to sleep when we exceed either the read-rate
limit OR the dirty-rate limit. The existing smushed-together "cost
limit" should just go away entirely.
If you want, I can mock up what I have in mind. I am pretty sure it
won't be very hard.
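Roughly the shape I have in mind, with made-up names
(vacuum_read_page_limit / vacuum_dirty_page_limit, counted per 20ms
interval) just to illustrate the bookkeeping; this is a sketch, not a
patch:

#include "postgres.h"

/* Two completely independent throttles, no blended cost.  Callers bump
 * the counters wherever the current code bumps VacuumCostBalance, then
 * call vacuum_rate_limit_check(). */
static int  vacuum_read_page_limit = 400;   /* pages read per interval */
static int  vacuum_dirty_page_limit = 100;  /* pages dirtied per interval */

static int  pages_read_this_interval = 0;
static int  pages_dirtied_this_interval = 0;

static void
vacuum_rate_limit_check(void)
{
    /* Sleep as soon as EITHER budget is exhausted; overshooting one limit
     * never borrows headroom from the other. */
    if (pages_read_this_interval >= vacuum_read_page_limit ||
        pages_dirtied_this_interval >= vacuum_dirty_page_limit)
    {
        pg_usleep(20000L);      /* fixed 20ms nap, per upthread */
        pages_read_this_interval = 0;
        pages_dirtied_this_interval = 0;
    }
}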
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company