From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>, Eduardo Piombino <drakorg(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: a heavy duty operation on an "unused" table kills my server
Date: 2010-01-16 05:18:06
Message-ID: 29052.1263619086@sss.pgh.pa.us
Lists: pgsql-performance
Greg Smith <greg(at)2ndquadrant(dot)com> writes:
> You might note that only one of these sources--a backend allocating a
> buffer--is connected to the process you want to limit. If you think of
> the problem from that side, it actually becomes possible to do something
> useful here. The most practical way to throttle something down without
> a complete database redesign is to attack the problem via allocation.
> If you limited the rate of how many buffers a backend was allowed to
> allocate and dirty in the first place, that would be extremely effective
> in limiting its potential damage to I/O too, albeit indirectly.
This is in fact exactly what the vacuum_cost_delay logic does.
It might be interesting to investigate generalizing that logic
so that it could throttle all of a backend's I/O, not just vacuum.
In principle I think it ought to work all right for any I/O-bound
query.
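
Very roughly, a generalized version might look something like the sketch
below. The knob names (io_cost_*) and the io_delay_point() hook are made
up for illustration and are not the actual vacuum-cost code, but the
accounting idea is the same: charge a cost for each buffer the backend
hits, misses, or dirties, and sleep once the running balance crosses a
limit.

```c
/*
 * Minimal sketch of cost-based I/O throttling in the style of
 * vacuum_cost_delay.  All names here are hypothetical, for
 * illustration only; this is not the PostgreSQL implementation.
 */
#include <stdio.h>
#include <unistd.h>

/* Hypothetical knobs, analogous to the vacuum_cost_* settings */
static int io_cost_page_hit   = 1;    /* buffer found in shared buffers */
static int io_cost_page_miss  = 10;   /* buffer had to be read from disk */
static int io_cost_page_dirty = 20;   /* buffer dirtied by this backend */
static int io_cost_limit      = 200;  /* accumulated cost before napping */
static int io_cost_delay_ms   = 20;   /* how long to nap once limit is hit */

static int io_cost_balance = 0;       /* running total for this backend */

/* Charge a cost each time the backend touches a buffer. */
static void
charge_io_cost(int cost)
{
    io_cost_balance += cost;
}

/* Called at safe points; naps if the backend has "spent" too much. */
static void
io_delay_point(void)
{
    if (io_cost_balance >= io_cost_limit)
    {
        usleep(io_cost_delay_ms * 1000);
        io_cost_balance = 0;
    }
}

int
main(void)
{
    /* Simulate a backend dirtying 1000 buffers in a tight loop. */
    for (int i = 0; i < 1000; i++)
    {
        charge_io_cost(io_cost_page_dirty);
        io_delay_point();
    }
    printf("done\n");
    return 0;
}
```

In a real implementation the delay point would have to be checked only at
places where it is safe to sleep, much as the vacuum code only naps between
pages.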
But, as noted upthread, this is not high on the priority list
of any of the major developers.
regards, tom lane