From: Bill Montgomery <billm(at)lulu(dot)com>
To: Shridhar Daithankar <shridhar(at)frodo(dot)hserus(dot)net>
Cc: lec <limec(at)streamyx(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Dump / restore for optimization?
Date: 2004-07-01 15:02:06
Message-ID: 40E4276E.9090101@lulu.com
Lists: pgsql-general
Shridhar Daithankar wrote:
> lec wrote:
>
>> I always have to dump & restore to make a database (which has been in
>> use for a few months) efficient again. I have scheduled a nightly vacuum
>> analyze, but only a dump & restore makes the database efficient
>> again. Has anyone experienced this?
>
>
> You could try several things if you want to avoid dump/restore
>
>
> - Try vacuum full instead of vacuum analyze
VACUUM FULL can be undesirable due to its locking behavior. If you can
afford to be unable to write to your database for the duration of the
VACUUM FULL, go for it.
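For comparison, a minimal sketch of the two commands (the table name
"mytable" is just a placeholder):

```sql
-- Plain VACUUM marks dead tuples reusable without an exclusive lock;
-- ANALYZE refreshes planner statistics at the same time:
VACUUM ANALYZE mytable;

-- VACUUM FULL compacts the table to its minimum size, but holds an
-- exclusive lock on the table for the duration, blocking other access:
VACUUM FULL mytable;
```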
Otherwise, make sure you have enough FSM (free space map) space. The
postgresql.conf parameters that affect it are max_fsm_relations and
max_fsm_pages. If you have tables that see a lot of UPDATE or DELETE
activity throughout the day, and tiny (e.g. default) FSM settings, you may
run out of space in your FSM to track "dead" tuples. When that happens,
your tables will grow and grow, and will not be fully cleaned up by a
regular vacuum. This leads to slowly degrading performance that can
only be fixed by a VACUUM FULL or a dump/restore.
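As a rough illustration, larger FSM settings in postgresql.conf might look
like this (the numbers below are made up; size them from the tail of
VACUUM VERBOSE output for your own database, not from this example):

```
# Free space map settings (postgresql.conf) -- illustrative values only.
# max_fsm_relations should cover every table and index in the cluster:
max_fsm_relations = 1000
# max_fsm_pages is the total number of pages with free space that can
# be tracked; if vacuum reports more dead pages than this, tables bloat:
max_fsm_pages = 200000
```

Changing these requires a server restart, so it is worth erring on the
generous side.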
Good Luck,
Bill Montgomery