From: Gaetano Mendola <mendola(at)bigfoot(dot)com>
To: Sven Willenberger <sven(at)dmv(dot)com>
Subject: Re: Inheritence versus delete from
Date: 2005-03-02 00:56:52
Message-ID: 42250F54.8060508@bigfoot.com
Lists: pgsql-performance
Sven Willenberger wrote:
> Trying to determine the best overall approach for the following
> scenario:
>
> Each month our primary table accumulates some 30 million rows (which
> could very well hit 60+ million rows per month by year's end). Basically
> there will end up being a lot of historical data with little value
> beyond archival.
>
> The question then arises as to the best approach, of which I have
> enumerated three:
>
> 1) Just allow the records to accumulate and maintain constant vacuuming,
> etc., allowing for the fact that most queries will only touch a recent
> subset of the data, which should be mostly cached.
>
> 2) Each month:
> SELECT * INTO 3monthsago_dynamically_named_table FROM bigtable WHERE
> targetdate < $3monthsago;
> DELETE FROM bigtable where targetdate < $3monthsago;
> VACUUM ANALYZE bigtable;
> pg_dump 3monthsago_dynamically_named_table for archiving;
In my experience, copying and deleting 60+ million rows in a single
transaction is not feasible, at least on my 1 GB RAM, 2-way CPU box.
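
For illustration only, a minimal sketch of doing the copy and delete in
smaller pieces (it reuses the bigtable/targetdate names from the quoted
message; the archive table name, the 3-month cutoff expression, and the
100000-row batch size are placeholders, and the ctid-based batching is
just one possible approach, not necessarily what either poster had in mind):

    -- copy the rows to be archived (read-only pass over bigtable)
    CREATE TABLE archive_3monthsago AS
      SELECT * FROM bigtable
      WHERE targetdate < CURRENT_DATE - INTERVAL '3 months';

    -- delete in bounded batches instead of one huge transaction;
    -- repeat this pair of statements until the DELETE removes 0 rows
    DELETE FROM bigtable
      WHERE ctid IN (SELECT ctid FROM bigtable
                     WHERE targetdate < CURRENT_DATE - INTERVAL '3 months'
                     LIMIT 100000);
    VACUUM ANALYZE bigtable;

Re-running the DELETE until it affects zero rows, with a VACUUM between
rounds, keeps any single transaction (and the dead-tuple buildup) bounded.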
Regards
Gaetano Mendola