From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Dan Armbrust" <daniel(dot)armbrust(dot)list(at)gmail(dot)com>
Cc: "Simon Riggs" <simon(at)2ndquadrant(dot)com>, "pgsql general" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Slow Vacuum was: vacuum output question
Date: 2008-12-30 16:47:17
Message-ID: dcc563d10812300847y188427ffh374a96b853ab417d@mail.gmail.com
Lists: pgsql-general
On Tue, Dec 30, 2008 at 9:32 AM, Dan Armbrust
<daniel(dot)armbrust(dot)list(at)gmail(dot)com> wrote:
> Haven't looked at that yet on this particular system. Last time, on
> different hardware when this occurred the vmstat 'wa' column showed
> very large values while vacuum was running. I don't recall what the
> bi/bo columns indicated.
That definitely sounds like poor I/O performance.
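For what it's worth, the 'wa' column in vmstat is the percentage of time the CPU sat idle waiting on I/O, so big numbers there with near-zero user/system CPU is the classic I/O-bound signature. A quick way to pull it out of the output (the sample line below is made up purely for illustration; on a real box you'd pipe `vmstat 1` in instead):

```shell
# `vmstat 1` prints one stats line per second; on most Linux boxes
# 'wa' is the 16th column. The sample data here is invented just to
# show the parsing -- it is not real output from Dan's machine.
printf '%s\n%s\n' \
  'r  b swpd  free  buff  cache si so   bi   bo  in   cs us sy id wa' \
  '2  5    0 81920 10240 512000  0  0 4200 3800 900 1500  2  3  5 90' |
awk 'NR==2 { print "wa=" $16 "%" }'
```

A sustained 'wa' anywhere near that high while vacuum runs means the disks, not the CPUs, are the bottleneck.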
> top also showed very high load averages while vacuum was running - but
> basically no cpu use.
Yeah, load is the number of processes running or waiting to run. If
vacuum is sucking up all the I/O, and this machine doesn't have much I/O
capability, then it's quite possible for other processes stuck behind
it to crank up the load average.
Also, were there any vacuum cost delay settings above 0 on this machine
when the test was run?
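You can check the relevant settings straight from psql; these are the standard PostgreSQL GUCs (a nonzero cost delay throttles vacuum's I/O, which slows the vacuum itself but eases pressure on everything else):

```sql
-- 0 means vacuum runs unthrottled:
SHOW vacuum_cost_delay;
-- The autovacuum workers have their own override:
SHOW autovacuum_vacuum_cost_delay;
```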
> Are there any common tools that could do a better disk benchmark than
> hdparm -Tt?
Keep in mind that hdparm hits the drive directly, not through the
filesystem. I use bonnie++ or iozone to test I/O.
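If you just want a rough-and-ready number before reaching for bonnie++, a plain dd write goes through the filesystem (unlike hdparm -Tt). The path and size below are placeholders; point it at the array in question and size it well past RAM so the page cache doesn't flatter the result:

```shell
# Crude sequential-write check through the filesystem. Adjust
# of= and count= for your setup -- 8 MB here is only a placeholder;
# use several GB on a real test so caching doesn't dominate.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=8 2>&1 | tail -n 1
rm -f /tmp/ddtest
```

The last line dd prints includes the elapsed time and throughput; it's no substitute for bonnie++ or iozone, but it'll catch a grossly underperforming array quickly.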