From: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
To: DANTE Alexandra <Alexandra(dot)Dante(at)bull(dot)net>
Cc: pgsql general <pgsql-general(at)postgresql(dot)org>
Subject: Re: VACUUM and fsm_max_pages
Date: 2006-07-07 16:30:37
Message-ID: 1152289837.22269.9.camel@state.g2switchworks.com
Lists: pgsql-general
On Fri, 2006-07-07 at 01:57, DANTE Alexandra wrote:
> Good morning List,
>
> I have seen several posts on this topic but have not found a complete
> answer.
> I’m using BenchmarkSQL to evaluate PostgreSQL in transaction processing
> and I work with PostgreSQL 8.1.3 on RHEL4-AS, Itanium-2 processor, 8GB RAM.
>
> The database, generated via BenchmarkSQL, has 200 warehouses and is
> about 20GB in size. The “max_fsm_pages” parameter is set to 20000 and
> “max_fsm_relations” to 1000.
>
> Between two benchmark runs, I launch a VACUUM, but at the end of it I
> see that PostgreSQL asks me to increase the “max_fsm_pages” parameter,
> and the proposed value grows with the number of VACUUMs launched…
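The usual fix is to raise max_fsm_pages (and, if you have a lot of tables,
max_fsm_relations) in postgresql.conf to at least the figure the VACUUM
output suggests, plus some headroom. A rough sketch, where 500000 is only
an illustrative number and not a recommendation for your workload:

  # postgresql.conf -- illustrative values only
  # each page slot costs roughly 6 bytes of shared memory,
  # each tracked relation roughly 70 bytes
  max_fsm_pages = 500000        # hypothetical; use at least what VACUUM reports
  max_fsm_relations = 1000

  # these two settings only take effect after a full server restart,
  # not a reload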
Oh, and if you can back up your database and restore it into a test
server, compare how much smaller the new data/base directory is than the
one on your production server. That'll give you an idea of how bloated
your database is. 10 to 30% larger is fine; 100 to 1000% larger is bad.
You get the idea.
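A minimal sketch of that comparison, assuming a database named
benchmarksql (substitute your own) and a scratch server you can restore
into; pg_database_size() and pg_size_pretty() are available in 8.1:

  # on the production server (the database name here is hypothetical)
  pg_dump benchmarksql > benchmarksql.dump
  psql benchmarksql -c "SELECT pg_size_pretty(pg_database_size('benchmarksql'));"

  # on the test server
  createdb benchmarksql
  psql benchmarksql < benchmarksql.dump
  psql benchmarksql -c "SELECT pg_size_pretty(pg_database_size('benchmarksql'));"

  # or compare the data/base subdirectories directly; find the OID with
  #   SELECT oid FROM pg_database WHERE datname = 'benchmarksql';
  du -sh $PGDATA/base/<oid>

If the freshly restored copy comes out a fraction of the production size,
that difference is roughly your dead-tuple bloat, and it's a good sign
max_fsm_pages has been too small for a while.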