From: | Edgardo Portal <egportal2002(at)yahoo(dot)com> |
---|---|
To: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: best practice in archiving CDR data |
Date: | 2010-03-29 14:08:23 |
Message-ID: | hoqc8n$ieb$1@news.eternal-september.org |
Lists: | pgsql-general |
On 2010-03-29, Juan Backson <juanbackson(at)gmail(dot)com> wrote:
>
> Hi,
>
> I am using Postgres to store CDR data for VoIP switches. The data size
> quickly grows to a few TBs.
>
> What I would like to do is to be able to regularly archive the oldest data
> so only the most recent 6 months of data is available.
>
> All the old data should be stored in a format that can be loaded back
> either into a DB table or into flat files.
>
> Does anyone know how I should go about doing that? Is there an existing
> tool that can already do this?
>
> thanks,
> jb
FWIW, I partition by ISO week, use INSERT RULEs to route CDRs to the correct
partition (keeping about 3 partitions "open" to new CDRs at any one time),
use pg_dump to archive partition tables to off-line storage, and
DROP TABLE to keep the main DBs to about 40 weeks of data. I used
to use monthly partitioning, but the file sizes got a bit awkward
to deal with.
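
A minimal sketch of the weekly routing described above, using the inheritance-plus-rules approach that was standard on the PostgreSQL versions current at the time (the table and column names are hypothetical, since the post doesn't show a schema):

```sql
-- Parent table; applications INSERT into this.
CREATE TABLE cdr (
    call_id    bigint,
    start_time timestamptz NOT NULL,
    duration   interval
);

-- One child partition per ISO week, e.g. 2010 week 13:
CREATE TABLE cdr_2010w13 (
    CHECK (start_time >= '2010-03-29' AND start_time < '2010-04-05')
) INHERITS (cdr);

-- Route INSERTs on the parent to the matching partition.
-- One such rule exists for each currently "open" week.
CREATE RULE cdr_insert_2010w13 AS
    ON INSERT TO cdr
    WHERE (NEW.start_time >= '2010-03-29'
       AND NEW.start_time <  '2010-04-05')
    DO INSTEAD
    INSERT INTO cdr_2010w13 VALUES (NEW.*);
```

With constraint_exclusion enabled, queries on the parent that filter on start_time only scan the matching child tables.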
When I need to restore old CDRs (e.g. to service a subpoena) I
use pg_restore to load the needed CDRs to a throwaway database
and process as necessary.
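
The archive/restore cycle might look roughly like this from the shell (database, partition, and file names here are made up for illustration):

```shell
# Archive a closed weekly partition to off-line storage in
# pg_dump's custom format, then drop it from the main DB:
pg_dump -Fc -t cdr_2010w13 cdrdb > cdr_2010w13.dump
psql -d cdrdb -c 'DROP TABLE cdr_2010w13'

# Later, restore into a throwaway database and query as needed:
createdb cdr_restore
pg_restore -d cdr_restore cdr_2010w13.dump
```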