From: Sven Willenberger <sven(at)dmv(dot)com>
To: Postgresql Performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Inheritence versus delete from
Date: 2005-02-28 23:59:13
Message-ID: 1109635153.4620.44.camel@lanshark.dmv.com
Lists: pgsql-performance
Trying to determine the best overall approach for the following
scenario:
Each month our primary table accumulates some 30 million rows (which
could very well hit 60+ million rows per month by year's end), so we
will end up with a lot of historical data that has little value beyond
archival.
The question then is which approach is best; I have enumerated three:
1) Just allow the records to accumulate, maintain constant vacuuming,
etc., and rely on the fact that most queries will only hit a recent
subset of the data and so should be mostly cached.
2) Each month (a concrete sketch follows the list):
   SELECT * INTO 3monthsago_dynamically_named_table FROM bigtable
       WHERE targetdate < $3monthsago;
   DELETE FROM bigtable WHERE targetdate < $3monthsago;
   VACUUM ANALYZE bigtable;
   pg_dump 3monthsago_dynamically_named_table for archiving;
3) Each month:
   CREATE TABLE newmonth_dynamically_named_table (LIKE mastertable)
       INHERITS (mastertable);
   modify the copy.sql script to copy into newmonth_dynamically_named_table;
   pg_dump 3monthsago_dynamically_named_table for archiving;
   DROP TABLE 3monthsago_dynamically_named_table;
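
For concreteness, here is a minimal sketch of one monthly pass of
option 2; the cutoff date, archive table name, and database name are
illustrative only (note that an unquoted identifier cannot start with
a digit, hence the renamed archive table):

   BEGIN;
   -- move rows older than the cutoff into a plain archive table
   SELECT * INTO archive_pre_2004_12 FROM bigtable
       WHERE targetdate < '2004-12-01';
   DELETE FROM bigtable WHERE targetdate < '2004-12-01';
   COMMIT;
   VACUUM ANALYZE bigtable;   -- must run outside the transaction block
   -- then from the shell:
   --   pg_dump -t archive_pre_2004_12 mydb > archive_pre_2004_12.dump
   --   (DROP TABLE archive_pre_2004_12 once the dump is verified)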
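
And a minimal sketch of option 3, the inheritance approach; again the
child table names, months, and database name are just placeholders:

   -- create the new month's child table; it inherits mastertable's columns
   CREATE TABLE mastertable_2005_03 () INHERITS (mastertable);
   -- point copy.sql (or whatever loads the data) at mastertable_2005_03;
   -- SELECTs against mastertable still see every child via inheritance
   -- three months later, archive and drop the oldest child:
   --   pg_dump -t mastertable_2004_12 mydb > mastertable_2004_12.dump
   DROP TABLE mastertable_2004_12;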
Any takes on which approach makes the most sense from a performance
and/or maintenance point of view? And are there other options I may
have missed?
Sven Willenberger