From: Mark Stosberg <mark(at)summersault(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: How to avoid vacuuming a huge logging table
Date: 2007-02-21 18:02:16
Message-ID: eri1gd$29bq$1@news.hub.org
Lists: pgsql-performance
Our application has a table that is only logged to, and infrequently
used for reporting. There are generally no deletes or updates.
Recently, the sheer size (an estimated 36 million rows) caused a serious
problem because it prevented a "vacuum analyze" on the whole database
from finishing in a timely manner.
As I understand it, a table with this usage pattern shouldn't need much
vacuuming anyway, since there are essentially no dead rows from updates
or deletes for vacuum to reclaim.
I'm looking for general advice from people who have faced the same
issue, and I'm considering a number of alternatives:
1. Once a month, we could delete and archive old rows, for possible
re-import later if we need to report on them. It would seem this would
need to be done as proper insert statements for re-importing. (Maybe
there is a solution for that with table partitioning?) A rough sketch of
a COPY-based archive step is below the list.
2. We could find a way to exclude the table from vacuuming, and let it
grow even larger. Putting the table in its own database would
accomplish that, but it would be nice to avoid the overhead of a second
database connection. (A per-table vacuum script, sketched below, might
be another way to get the same effect.)
3. Take a really different approach. Log in CSV format to text files
instead, and only import the date ranges we need "on demand" if a report
is requested on the data. (Also sketched below.)
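
For option 1, rather than dumping insert statements, maybe a COPY-based
archive would do. Roughly what I have in mind, with a made-up table name
(activity_log) and timestamp column (logged_at) standing in for our real
schema, and assuming a release new enough for COPY (SELECT ...) TO:

    BEGIN;
    -- dump rows older than a month to a flat file we could COPY back in later
    COPY (SELECT * FROM activity_log
          WHERE logged_at < now() - interval '1 month')
      TO '/var/backups/activity_log_2007-01.csv' WITH CSV;
    -- then remove those rows from the live table
    DELETE FROM activity_log
     WHERE logged_at < now() - interval '1 month';
    COMMIT;

Since now() is fixed for the whole transaction, the COPY and the DELETE
should see the same cutoff.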
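
For option 2, instead of a second database, maybe it would be enough to
stop running a database-wide "vacuum analyze" and instead generate
per-table commands that skip the logging table, something like:

    -- produce one VACUUM ANALYZE per user table, leaving out the log table;
    -- the output could be fed back into psql from the nightly cron job
    SELECT 'VACUUM ANALYZE ' || schemaname || '.' || tablename || ';'
      FROM pg_tables
     WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
       AND tablename <> 'activity_log';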
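
And for option 3, pulling a date range back in on demand would
presumably just be a COPY FROM into a scratch table, roughly (file path
and columns are invented):

    -- load one month's CSV log file only when a report actually needs it
    CREATE TEMP TABLE report_log (
        logged_at  timestamp,
        message    text
    );
    COPY report_log FROM '/var/log/app/activity_2007-01.csv' WITH CSV;
    -- report queries then run against report_log; the temp table
    -- disappears at the end of the session

(COPY to and from server-side files needs superuser rights, so we might
end up using psql's \copy instead.)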
Thanks for any tips.
Mark