From: | Tom Lane <tgl@sss.pgh.pa.us> |
---|---|
To: | Kenji Morishige <kenjim@juniper.net> |
Cc: | pgsql-performance@postgresql.org |
Subject: | Re: optimizing db for small table with tons of updates |
Date: | 2006-04-03 18:39:10 |
Message-ID: | 15521.1144089550@sss.pgh.pa.us |
Lists: | pgsql-performance |
Kenji Morishige <kenjim@juniper.net> writes:
> Various users run a tool that updates this table to determine if the particular
> resource is available or not. Within a course of a few days, this table can
> be updated up to 200,000 times. There are only about 3500 records in this
table, but the update and select queries against this table start to slow
down considerably after a few days. Ideally, this table doesn't even need
> to be stored and written to the filesystem. After I run a vacuum against this
> table, the overall database performance seems to rise again.
You should never have let such a table go that long without vacuuming.
You might consider using autovac to take care of it for you. If you
don't want to use autovac, set up a cron job that vacuums the table
at least once every few thousand updates.
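A minimal sketch of such a cron job, assuming the standard `vacuumdb`
client utility is on the PATH (the table name "resource_status" and
database name "inventory" are placeholders; tune the schedule to your
actual update rate):

```shell
# crontab entry: vacuum only the hot 3500-row status table, hourly.
# At ~200,000 updates over a few days, an hourly vacuum keeps the
# dead-tuple count per pass down to a few thousand rows, so the table
# stays small on disk and index scans stay fast.
0 * * * *  vacuumdb --table=resource_status inventory
```

A plain VACUUM (not VACUUM FULL) is enough here: it only marks dead row
versions reusable and does not take an exclusive lock, so the users'
update tool can keep running while it works.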
regards, tom lane