From: | "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com> |
---|---|
To: | "Rob Kirkbride" <rob(dot)kirkbride(at)gmail(dot)com> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: Data Warehousing |
Date: | 2007-09-03 07:52:45 |
Message-ID: | dcc563d10709030052w16d02d8esce2be0adfd129388@mail.gmail.com |
Lists: pgsql-general
On 9/3/07, Rob Kirkbride <rob(dot)kirkbride(at)gmail(dot)com> wrote:
> Hi,
>
> I've got a postgres database collecting logged data. This data I have to keep
> for at least 3 years. The data in the first instance is being recorded in a
> postgres cluster. It then needs to be moved to a reports database server for
> analysis. I'd therefore like a job to dump data from the cluster, say every
> hour, and record it in the reports database. The clustered database could
> then be purged of data more than, say, a week old.
>
> So basically I need a dump/restore that only appends new data to the reports
> server database.
>
> I've googled but can't find anything, can anyone help?
You might find an answer in partitioning your data; there's a section
in the docs on it. If you're partitioning by week, you can just dump
the data from the newest couple of partitions for the reports server,
and clear out anything older with a simple DELETE WHERE date < now() -
interval '1 week', or something like that.
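
Roughly, and purely as a sketch (the table and column names below are
made up, not from your schema), the inheritance-based partitioning the
docs describe plus the hourly copy-out could look like:

  -- Parent table; names here are for illustration only.
  CREATE TABLE logged_data (
      logged_at timestamptz NOT NULL,
      payload   text
  );

  -- One child table per week, attached via inheritance with a CHECK
  -- constraint so constraint exclusion can skip irrelevant weeks.
  CREATE TABLE logged_data_2007w36 (
      CHECK (logged_at >= '2007-09-03' AND logged_at < '2007-09-10')
  ) INHERITS (logged_data);

  -- Hourly job on the cluster: copy out only the rows added since the
  -- last run; load the file on the reports server with COPY ... FROM,
  -- which only appends rows.
  COPY (SELECT * FROM logged_data
         WHERE logged_at >= now() - interval '1 hour')
    TO '/tmp/logged_data_increment.copy';

  -- Purge on the cluster: either DELETE the old rows ...
  DELETE FROM logged_data WHERE logged_at < now() - interval '1 week';
  -- ... or simply DROP the child tables for weeks you no longer need.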