From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: Eduardo Piombino <drakorg(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: a heavy duty operation on an "unused" table kills my server
Date: 2010-01-13 05:41:36
Message-ID: 4B4D5D10.40806@postnewspapers.com.au
Lists: pgsql-performance
On 13/01/2010 12:59 PM, Eduardo Piombino wrote:
> My question then is: is there a way to limit the CPU assigned to a
> specific connection?
> I mean, I don't care if my ALTER TABLE takes 4 days instead of 4 hours.
>
> Something like:
> pg_set_max_cpu_usage(2/100);
You're assuming the issue is CPU. I think that unlikely. In general, a
single thread/process that wants as much CPU as it can get won't bring
any machine with a half-decent OS to its knees. Any UNIX system should
barely notice - everything else will slow down somewhat, depending on
its scheduler, but in any sane setup it shouldn't slow down by more than
half. Modern Windows tends to be fairly well behaved here too.
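If it did turn out to be CPU, there's no per-connection CPU limit built
into PostgreSQL, but each connection is a separate backend process, so
you can lower that process's scheduler priority from the OS instead. A
rough sketch, assuming a UNIX-ish box; the PID shown is made up:

   -- in the session that will run the ALTER TABLE
   SELECT pg_backend_pid();   -- say it returns 12345

   # then, from a shell, as root or the postgres user:
   renice -n 19 -p 12345

That only lowers CPU priority, though, so it won't help much if the
real bottleneck is disk I/O.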
What's much more likely is that you're working with a crappy disk setup
- such as a RAID 5 array without battery-backed cache, or a single slow
disk. You probably also have quite deep write queuing in the RAID
controller / disk / OS. This means that your disk-intensive ALTER TABLE
makes your disk subsystem so busy that it takes ages before any other
process gets a look-in. It's not unlikely that I/O requests are being
queued so deeply that it (often) takes several seconds for the
controller to get around to executing a newly submitted read or write
request. If your other queries need to do more than a few steps where
they read some data, think about it, and then read other data depending
on the first read, they're going to take forever, because they'll have
to endure a long delay before each disk access.
Of course, that's just a guess, since you've provided no information on
your hardware. Try collecting up some of the information shown here:
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems#Things_you_need_to_mention
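While you're at it, a minute or two of iostat/vmstat output captured
while the ALTER TABLE is running would tell you a lot. Something along
these lines, assuming Linux with the sysstat package installed:

   # per-device utilisation, queue size and wait times, once a second
   iostat -x 1

   # overall picture of CPU use vs time spent waiting on I/O
   vmstat 1

If %util sits near 100% and await climbs to hundreds of milliseconds or
more while your other queries stall, it's the disks, not the CPU.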
In any case, if it *is* I/O related, what to do about it depends on
exactly what sort of I/O issue it is. Extremely deep queuing? Looks good
for throughput benchmarks, but is stupid if you care about latency and
have some I/O that's higher priority than others, so reduce your queue
depth. Very slow writes hammering reads? Don't use RAID 5. Etc.
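On Linux, for example, you can experiment with the OS side of the queue
depth, and (if you're on the CFQ I/O scheduler) with per-process I/O
priority. A rough sketch only - "sda" and the PID are placeholders for
your actual device and backend PID:

   # see, then shrink, how many requests the kernel queues per device (as root)
   cat /sys/block/sda/queue/nr_requests
   echo 64 > /sys/block/sda/queue/nr_requests

   # CFQ only: run the bulk job's backend at idle I/O priority
   ionice -c3 -p 12345

None of that is a substitute for a sane disk subsystem (battery-backed
write cache, no RAID 5 for write-heavy work), but it can help stop one
bulk operation from starving everything else.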
--
Craig Ringer