From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Ed L(dot)" <pgsql(at)bluepolka(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Interpreting vacuum verbosity
Date: 2004-05-10 23:18:09
Message-ID: 19916.1084231089@sss.pgh.pa.us
Lists: pgsql-general
"Ed L." <pgsql(at)bluepolka(dot)net> writes:
> If it were indeed the case that we'd leaked a lot of diskspace, then after
> bumping max_fsm_pages up to a much higher number (4M), will these pages
> gradually be "remembered" as they are accessed by autovac and or queried,
> etc? Or is a dump/reload or 'vacuum full' the only way? Trying to avoid
> downtime...
The next vacuum will add the "leaked" space back into the FSM, once
there's space there to remember it. You don't need to do anything
drastic, unless you observe that the amount of wasted space is so large
that a vacuum full is needed.
BTW, these days, a CLUSTER is a good alternative to a VACUUM FULL; it's
likely to be faster if the VACUUM would involve moving most of the live
data anyway.
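A minimal sketch of the CLUSTER route (the table and index names are placeholders; on releases of this vintage the syntax is "CLUSTER indexname ON tablename", and it takes an exclusive lock on the table while rewriting it):

    -- rewrite bigtable in the order of the chosen index,
    -- discarding the dead space as a side effect
    CLUSTER bigtable_pkey ON bigtable;

    -- refresh planner statistics after the rewrite
    ANALYZE bigtable;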
regards, tom lane