From: Alvaro Herrera <alvherre(at)atentus(dot)com>
To: Bjoern Metzdorf <bm(at)turtle-entertainment(dot)de>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: db grows and grows
Date: 2002-06-18 16:42:24
Message-ID: Pine.LNX.4.33.0206181239450.18262-100000@polluelo.lab.protecne.cl
Lists: pgsql-general
On Tue, 18 Jun 2002, Bjoern Metzdorf wrote:
> I have a 3 GB (filesystem-based) pgdata directory. I run plain VACUUMs
> every 15 minutes and VACUUM ANALYZE every night.
>
> After dumping the whole db (pg_dump -c db), dropping and recreating it,
> reloading the dump, and vacuuming again, my pgdata directory is only
> 1 GB. The dump had no errors; all data was saved and reloaded.
Your indexes are probably growing. In current versions, VACUUM doesn't
shrink them, so you have to do that manually with REINDEX. There's a
contributed script that automates this reasonably well, much like the
vacuum script does. Look in the archives.
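
For example (just a sketch; the index and table names below are
placeholders, and details may differ between versions), you can spot the
fattest indexes and rebuild them by hand:

    -- List the largest indexes; relpages counts 8 kB pages
    -- as of the last VACUUM or ANALYZE.
    SELECT relname, relpages
      FROM pg_class
     WHERE relkind = 'i'
     ORDER BY relpages DESC
     LIMIT 10;

    -- Rebuild one bloated index (the parent table is locked while it runs) ...
    REINDEX INDEX my_bloated_index;

    -- ... or rebuild every index on a table at once.
    REINDEX TABLE my_big_table;

Run after the nightly vacuum, that should keep the on-disk size much
closer to what a fresh dump/reload gives you.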
> Shouldn't the vacuuming take care of this?
Yes IMVHO, but at present it doesn't.
--
Alvaro Herrera (<alvherre[(at)]dcc(dot)uchile(dot)cl>)