Re: large table vacuum issues

From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Ed L(dot)" <pgsql(at)bluepolka(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: large table vacuum issues
Date: 2008-01-05 01:21:13
Message-ID: dcc563d10801041721s54d36a65vf4f2d499f3430a7f@mail.gmail.com
Lists: pgsql-general

On Jan 4, 2008 6:38 PM, Ed L. <pgsql(at)bluepolka(dot)net> wrote:
> We need some advice on how to handle some large table autovacuum
> issues. One of our 8.1.2

First of all, update your 8.1 install to 8.1.10. Failing to keep up
with bug fixes is negligent. Who knows, you might be getting bitten
by a bug that was fixed between 8.1.2 and 8.1.10.

> autovacuums is launching a DB-wide
> vacuum on our 270GB database to prevent xid wrap-around, but is
> getting hung up and/or bogged down for hours on a 40GB table and
> taking the server performance down with it, apparently due to an
> IO bottleneck.

Have you tried adjusting the

#vacuum_cost_delay = 0 # 0-1000 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 0-10000 credits

settings to make vacuum less intrusive? That might be the easiest
fix.
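For example, something like this in postgresql.conf (the numbers are
just a starting point, you'll want to tune them for your hardware):

vacuum_cost_delay = 20 # sleep 20ms each time the cost limit is hit
vacuum_cost_limit = 200 # cost credits accumulated before sleeping

A pg_ctl reload (or a SIGHUP) is enough for the running server to
pick those up.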

> Are there any other tricks to get it past this large table for
> the time being and still get the xid wraparound fix?

The other trick would be to do a dump/restore of your whole db, which
can often be quicker than vacuuming it if it's got a lot of dead
tuples in it.
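A rough sketch of that, assuming your database is named "mydb" and
the dump path is just an example:

pg_dump -Fc mydb > /backups/mydb.dump # compressed custom-format dump
dropdb mydb # db is unavailable from here until the restore finishes
createdb mydb
pg_restore -d mydb /backups/mydb.dump

Bear in mind the downtime between the dropdb and the end of the
restore, and verify the dump is good before dropping anything.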
