From: Kevin Kempter <kevin(at)kevinkempterllc(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: significant vacuum issues - looking for suggestions
Date: 2007-08-27 22:03:52
Message-ID: 200708271603.52740.kevin@kevinkempterllc.com
Lists: pgsql-performance
On Monday 27 August 2007 15:00:41 you wrote:
> On Fri, Aug 24, 2007 at 04:41:44PM -0400, Bill Moran wrote:
> > In response to Kevin Kempter <kevin(at)kevinkempterllc(dot)com>:
> > > Hi List;
> > >
> > > I've just started working with a client that has been running Postgres
> > > (with no DBA) for a few years. They're running version 8.1.4 on 4-way
> > > dell boxes with 4Gig of memory on each box attached to RAID-10 disk
> > > arrays.
> > >
> > > Some of their key config settings are here:
> > > shared_buffers = 20480
> > > work_mem = 16384
> > > maintenance_work_mem = 32758
> >
> > Before you do any of those other things, bump shared_buffers to about
> > 120000 and maintenance_work_mem to 250000 or so -- unless this box
> > has other applications on it using significant amounts of those 4G of
> > RAM. You may find that these changes alone are enough to get vacuum
> > to complete. You'll need to restart the server for the shared_buffers
> > setting to take effect.
>
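On 8.1 I believe that translates to something like this in postgresql.conf (a
rough sketch, assuming the boxes are dedicated to postgres):

    shared_buffers = 120000          # 8KB pages in 8.1 => ~940MB; needs restart
    maintenance_work_mem = 250000    # KB => ~244MB; a reload is enough
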
> For the really bloated table, you might need to go even higher than
> 250000 for maint_work_mem. IIRC vacuum needs 6 bytes per dead tuple, so
> that means 43M rows... with 5M dead pages, that means less than 10 rows
> per page, which is unlikely. Keep in mind that if you do a vacuum
> verbose, you'll be able to see if vacuum runs out of
> maintenance_work_mem, because you'll see multiple passes through all the
> indexes.
>
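Good tip - I'll run it with verbose and watch for that (sketch; the table name
here is made up):

    VACUUM VERBOSE bloated_table;
    -- if the output shows the same index being scanned more than once,
    -- vacuum filled maintenance_work_mem and had to make multiple passes
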
> You could also potentially use this to your benefit. Set maint_work_mem
> low enough so that vacuum will have to start its cleaning pass after
> only an hour or so... depending on how big/bloated the indexes are on
> the table, it might take another 2-3 hours to clean everything. I
> believe that as soon as you see it start on the indexes a second time
> you can kill it... you'll have wasted some work, but more importantly
> you'll have actually vacuumed part of the table.
>
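If I end up trying that, it would look something like this (sketch; the value
is just a guess I'd tune against the table):

    SET maintenance_work_mem = 65536;   -- KB, session-local; forces an
                                        -- early index-cleaning pass
    VACUUM VERBOSE bloated_table;
    -- once it starts through the indexes a second time, cancel it;
    -- the cleanup work from the first pass is kept
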
> But all of that's a moot point if they're running the default free space
> map settings, which are way, way, way too conservative in 8.1. If you've
> got one table with 5M dead pages, you probably want to set fsm_pages to
> at least 50000000 as a rough guess, at least until this is under
> control. Keep in mind that does equate to 286M of memory, though.
>
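For reference, the actual 8.1 GUC is max_fsm_pages; roughly (sketch, and it
needs a restart since it's allocated in shared memory):

    max_fsm_pages = 50000000       # ~6 bytes each => ~286MB of shared memory
    max_fsm_relations = 1000       # worth bumping too if they have many tables
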
> As for your pg_dump idea... why not just do a CREATE TABLE AS SELECT *
> FROM bloated_table? That would likely be much faster than messing around
> with pg_dump.
>
> What kind of disk hardware is this running on? A good raid 10 array with
> write caching should be able to handle a 200G database fairly well; at
> least better than it is from what I'm hearing.
The memory settings are way low on all their db servers (less than 170Meg for
shared_buffers). I fixed this table by creating a new_** table, doing an
INSERT INTO ... SELECT from the old one, and then renaming.
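Roughly what I did (sketch from memory; the real table names differ, and the
indexes, constraints, and grants have to be recreated on the new table):

    CREATE TABLE new_bigtable AS SELECT * FROM bigtable;
    -- recreate indexes, constraints, grants on new_bigtable here
    BEGIN;
    ALTER TABLE bigtable RENAME TO old_bigtable;
    ALTER TABLE new_bigtable RENAME TO bigtable;
    COMMIT;
    -- then DROP TABLE old_bigtable once everything checks out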
I'm still working through the memory settings and reviewing their other config
settings and the filesystem type/settings, with a security audit to follow.
It's a new client and they've been running postgres for a few years on approx
8 db servers with no DBA.
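As a first pass I've just been pulling the current values straight from each
server (sketch):

    SELECT name, setting, context
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'work_mem',
                    'maintenance_work_mem', 'max_fsm_pages');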
The servers are 4-way Intel boxes (NOT dual-core) with 4G of memory each,
attached to RAID-10 arrays.