From: "Charles Duffy" <charles(dot)duffy(at)gmail(dot)com>
To: a(dot)maclean(at)cas(dot)edu(dot)au
Cc: General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Backing up and deleting a database.
Date: 2008-07-15 07:07:30
Message-ID: dfdaea8f0807150007t778bc041v786d647d978c4f54@mail.gmail.com
Lists: pgsql-general
Hi,
On Tue, Jul 15, 2008 at 2:52 PM, Andrew Maclean
<andrew(dot)amaclean(at)gmail(dot)com> wrote:
> We have a database that grows in size quite quickly. Of course we
> back up nightly and keep a week's worth of data.
>
> However, we need to keep a few months of data online; the rest can be
> archived, as it is unlikely that it will be used again.
>
> As I see it we can:
> 1) Run a query to drop/delete old data; the downside here is that we lose it.
> 2) Stop the database (this is important because clients are writing to
> it), back it up, delete it, and recreate the database. Has anyone done
> this? Does anyone have a script for this?
It sounds like table partitioning could be useful in your situation,
depending on what your data looks like and how you want to query it.
It's worth taking the time to read:
http://www.postgresql.org/docs/8.3/interactive/ddl-partitioning.html
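
To give a rough idea of what that involves on 8.3: partitioning there
is built from inheritance, a CHECK constraint per child table, and an
insert trigger on the parent. Here is a minimal, untested sketch; the
"observations" table and its columns are made up for illustration:

  -- Parent table: queries go against this; it holds no rows itself.
  CREATE TABLE observations (
      obs_time timestamptz NOT NULL,
      value    double precision
  );

  -- One child per month; the CHECK constraint lets constraint
  -- exclusion skip irrelevant partitions at query time.
  CREATE TABLE observations_2008_06 (
      CHECK (obs_time >= '2008-06-01' AND obs_time < '2008-07-01')
  ) INHERITS (observations);
  CREATE TABLE observations_2008_07 (
      CHECK (obs_time >= '2008-07-01' AND obs_time < '2008-08-01')
  ) INHERITS (observations);

  -- Route inserts on the parent into the correct child.
  CREATE OR REPLACE FUNCTION observations_insert() RETURNS trigger AS $$
  BEGIN
      IF NEW.obs_time >= '2008-07-01' AND NEW.obs_time < '2008-08-01' THEN
          INSERT INTO observations_2008_07 VALUES (NEW.*);
      ELSIF NEW.obs_time >= '2008-06-01' AND NEW.obs_time < '2008-07-01' THEN
          INSERT INTO observations_2008_06 VALUES (NEW.*);
      ELSE
          RAISE EXCEPTION 'no partition for timestamp %', NEW.obs_time;
      END IF;
      RETURN NULL;  -- row already stored in a child, so suppress it here
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER observations_insert_trigger
      BEFORE INSERT ON observations
      FOR EACH ROW EXECUTE PROCEDURE observations_insert();

Remember to enable constraint_exclusion in postgresql.conf (it is off
by default in 8.3), or the planner will scan every partition.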
If you're basically inserting a series of observations into one large
table, this approach could be useful: it increases the amount of data
you can easily manage, and it can automate something like a rolling
two-month window of online data. A script could be put together to
periodically dump out the oldest partition, drop it, create a new
partition, and maintain the associated triggers.
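
Something along these lines (again untested, reusing the made-up names
above, with hard-coded months you would compute in practice) could run
from cron each month:

  #!/bin/sh
  # Archive the oldest partition to a compressed dump, then drop it.
  pg_dump -t observations_2008_05 mydb | gzip > observations_2008_05.sql.gz
  psql mydb -c "DROP TABLE observations_2008_05"

  # Create next month's partition.
  psql mydb <<'EOF'
  CREATE TABLE observations_2008_08 (
      CHECK (obs_time >= '2008-08-01' AND obs_time < '2008-09-01')
  ) INHERITS (observations);
  EOF

  # Finally, CREATE OR REPLACE the insert trigger function so it
  # covers the new month and no longer references the dropped one.

Because each month lives in its own table, the "delete" step is a cheap
DROP TABLE rather than a bulk DELETE plus vacuum, and clients can keep
writing to the parent table throughout.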
Charles Duffy