From: Christoph Frick <frick(at)sc-networks(dot)com>
To: "Ryan D(dot) Enos" <renos(at)ucla(dot)edu>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: large duplicated files
Date: 2007-08-17 08:37:07
Message-ID: 20070817083707.GJ9296@byleth.sc-networks.de
Lists: pgsql-novice
On Fri, Aug 17, 2007 at 12:15:13AM -0700, Ryan D. Enos wrote:
> Well, I feel like the guy who goes to the doctor and then finds the
> pain is suddenly gone when he gets there. I have discovered that my
> previously described problem was almost certainly the result of
> temporary tables that were not being dropped after a crash through an
> ODBC connection (at least I hope that's where those files were coming
> from). However, I am still curious if anybody knows how I can find
> and destroy those tables in the event of a crash?
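To the temp-table question: orphaned temporary tables left behind by a
crashed backend usually live in a per-backend pg_temp_NNN schema. As a
rough sketch (untested here, but the catalog names should be right for
8.x) to list them:

  SELECT n.nspname, c.relname
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE n.nspname LIKE 'pg_temp_%'  -- per-backend temp schemas
    AND c.relkind = 'r';            -- ordinary tables only

Once you are sure no live session still owns such a schema, dropping the
table by its qualified name (DROP TABLE pg_temp_NNN.tablename) should
free the space.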
There are lots of scripts out there (Google for them) that show which
table or index is actually using up your hard disk space. In an older
PostgreSQL version, for example, an index of ours went nuts and kept
growing for no reason; reindexing fixed it.
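As a minimal example of such a query (a sketch only; relpages is updated
by VACUUM/ANALYZE, so run those first, and the math assumes the default
8 kB block size):

  SELECT relname, relkind, relpages,
         relpages * 8 / 1024 AS approx_mb  -- 8 kB pages -> MB
  FROM pg_class
  ORDER BY relpages DESC
  LIMIT 20;

On 8.1 and later, pg_relation_size('tablename') reports current sizes
without a fresh ANALYZE. If a specific index turns out bloated,
REINDEX INDEX indexname; rebuilds it.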
If you delete lots of data, also be sure to vacuum the database
(depending on your version) and to have a large enough free space map
(FSM) configured. Run a verbose vacuum to find out whether the FSM
settings are sufficient; that shows up at the end of the report.
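For example (a sketch; the FSM settings live in postgresql.conf, and the
values below are placeholders you must size to your own database):

  VACUUM VERBOSE;
  -- if the summary at the end warns that the free space map is too
  -- small, raise these in postgresql.conf and restart the server:
  --   max_fsm_pages = 200000    -- placeholder; pages with free space to track
  --   max_fsm_relations = 1000  -- placeholder; relations to track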
--
cu