| From: | "Ed L(dot)" <pgsql(at)bluepolka(dot)net> |
|---|---|
| To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
| Cc: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: Interpreting vacuum verbosity |
| Date: | 2004-05-10 17:37:28 |
| Message-ID: | 200405101137.28730.pgsql@bluepolka.net |
| Lists: | pgsql-general |
On Friday May 7 2004 12:48, Tom Lane wrote:
> "Ed L." <pgsql(at)bluepolka(dot)net> writes:
> > 2) Would this low setting of 10000 explain the behavior we saw of
> > seqscans of a perfectly analyzed table with 1000 rows requiring
> > ridiculous amounts of time even after we cut off the I/O load?
>
> Possibly. The undersized setting would cause leakage of disk space
> (that is, new rows get appended to the end of the table even when space
> is available within the table, because the system has "forgotten" about
> that space due to lack of FSM slots to remember it in). If the physical
> size of the table file gets large enough, seqscans will take a long time
> no matter how few live rows there are. I don't recall now whether your
> VACUUM VERBOSE results showed that the physical table size (number of
> pages) was out of proportion to the actual number of live rows. But it
> sure sounds like that might have been the problem.
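(For anyone following along: a quick way to check for that disproportion,
using a hypothetical table name 'mytable', is to compare the page and
live-row estimates that VACUUM/ANALYZE maintain in pg_class:

    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname = 'mytable';

If relpages is wildly out of line with reltuples, given that many rows
normally fit in each 8KB page, the file has probably bloated the way Tom
describes.)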
If it were indeed the case that we'd leaked a lot of disk space, then after
bumping max_fsm_pages up to a much higher number (4M), will these pages
gradually be "remembered" as they are touched by autovac and/or queries,
etc.? Or is a dump/reload or 'vacuum full' the only way? Trying to avoid
downtime...
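My tentative understanding of the options, sketched below with a
hypothetical table 'mytable' (corrections welcome; max_fsm_pages is, I
believe, only read at postmaster start, so changing it means a restart):

    # postgresql.conf (takes effect on postmaster restart)
    max_fsm_pages = 4000000

    -- After the restart, a plain VACUUM re-registers the free space so
    -- new rows can reuse it, but it does not shrink the file on disk:
    VACUUM VERBOSE mytable;

    -- Compacting the file itself takes VACUUM FULL, which holds an
    -- exclusive lock on the table while it runs:
    VACUUM FULL VERBOSE mytable;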