From: Eugeny N Dzhurinsky <bofh(at)redwerk(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: managing database with thousands of tables
Date: 2006-07-05 13:07:03
Message-ID: 20060705130703.GA2428@office.redwerk.com
Lists: pgsql-performance
Hello!
I am facing some strange performance problems with PostgreSQL 8.0.
I have an application which handles a lot of tasks, and each task is kept in a
separate table. Those tables are dropped and created again periodically
(precisely, whenever new task results come back from a remote server). Each
table can hold hundreds of thousands of records, but mostly they hold just a
few thousand.
Sometimes I see a performance loss when working with the database, and after I
vacuumed the entire database, I saw that some tables and indexes in the pg_*
schemas were optimized and hundreds of thousands of records were deleted. Could
that be the reason for the performance loss, and if so, how can I fix it?
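One way I could confirm that catalog bloat is the culprit is to watch how large
the pg_catalog tables have grown; a rough sketch (the database name "mydb" is a
placeholder, and the query only lists sizes, it does not fix anything):

```shell
# Sketch: list the ten largest system catalogs by on-disk size.
# relpages is the page count (8 kB pages); a catalog with many pages
# but few live tuples (reltuples) has likely bloated from table churn.
SQL="SELECT relname, relpages, reltuples
     FROM pg_class
     WHERE relnamespace = (SELECT oid FROM pg_namespace
                           WHERE nspname = 'pg_catalog')
     ORDER BY relpages DESC LIMIT 10"
echo "$SQL"
# To actually run it (placeholder database name):
# psql -d mydb -c "$SQL"
```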
I have pg_autovacuum up and running all the time:
pg_autovacuum -d 3 -D -L /dev/null
but it seems pg_autovacuum does not vacuum the system tables.
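In the meantime I am considering vacuuming the catalogs explicitly from cron,
something like this sketch (the catalog list and the database name "mydb" are
my guesses at what churns most when tables are dropped and recreated):

```shell
# Sketch: explicitly vacuum the system catalogs that take churn from
# frequent DROP/CREATE TABLE. "mydb" is a placeholder database name.
for t in pg_class pg_attribute pg_attrdef pg_index pg_type pg_depend; do
  SQL="VACUUM ANALYZE pg_catalog.$t;"
  echo "$SQL"
  # psql -d mydb -c "$SQL"
done
```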
--
Eugene Dzhurinsky
Next Message: Tom Lane | 2006-07-05 13:39:31 | Re: managing database with thousands of tables
Previous Message: Gregory S. Williamson | 2006-07-05 11:06:23 | Re: Is postgresql ca do the job for software deployed in