From: Ken(dot)Colson(at)sage(dot)com
To: rob(dot)kirkbride(at)gmail(dot)com
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Data Warehousing
Date: 2007-09-04 14:06:07
Message-ID: 057BEBE753E8AD4D8F202B35578D8169057C2B48@gnvalex004-backup.mmrd.com
Lists: pgsql-general
>I am on a Linux platform but I'm going to need some pointers regarding
>the cron job. Are you suggesting that I parse the dump file? I assume I
>would need to switch to using inserts and then parse the dump looking
>for where I need to start from?
Something you may want to consider is dblink from contrib. We have a
similar situation for archiving collected data and were able to
implement a fairly easy solution that does not require parsing dump
files, just a simple(ish) query based on the insertion time.
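
A minimal sketch of what such a dblink query might look like, run from the
warehouse side. The connection string, and the table and column names
(collected_data, collected_data_archive, inserted_at, and the column list)
are all hypothetical placeholders, not anything from our actual setup:

```sql
-- Connect from the warehouse database to the live database via contrib/dblink.
SELECT dblink_connect('live', 'host=livehost dbname=production user=archiver');

-- Pull only rows inserted since the last run (here: the last day) and
-- append them to the local archive table. dblink() returns SETOF record,
-- so the column definition list after AS is required.
INSERT INTO collected_data_archive
SELECT *
FROM dblink('live',
            'SELECT * FROM collected_data
             WHERE inserted_at >= now() - interval ''1 day''')
     AS t(id integer, payload text, inserted_at timestamptz);

SELECT dblink_disconnect('live');
```

A cron job can run this via psql on whatever schedule fits; tracking the
last-archived timestamp in a local table is more robust than a fixed interval.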
-Ken