From: MirrorX <mirrorx(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: backup-strategies for large databases
Date: 2011-08-15 23:06:39
Message-ID: 1313449599604-4702690.post@n5.nabble.com
Lists: pgsql-general
I looked into data partitioning and it is definitely something we will use
soon. But as far as the backups are concerned, how can I take a backup
incrementally? If I understand it correctly, the idea is to partition a big
table (using a date field, for example) and then, each night, take a dump of
just the 'daily' partition, so that the dump of this specific table is small
relative to the size of the whole table. Is that right?
So we are talking about logical backups without PITR. I am not saying that
it's a bad idea, I just want to make sure that I got it right.
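Just to make it concrete, here is a rough sketch of what I have in mind,
using the inheritance-style partitioning from the docs (the table, column
and database names are made up by me):

-- parent table
CREATE TABLE events (
    event_id   bigserial,
    event_time timestamptz NOT NULL,
    payload    text
);

-- one child table per day; the CHECK constraint says which rows it holds
CREATE TABLE events_2011_08_15 (
    CHECK (event_time >= '2011-08-15' AND event_time < '2011-08-16')
) INHERITS (events);

-- nightly 'incremental' dump: dump only the newest partition
-- $ pg_dump -Fc -t events_2011_08_15 mydb > events_2011_08_15.dump

So each night only the latest child table would get dumped, and restoring
would mean loading those daily dumps on top of an older full dump.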
Thank you again all for your answers.
--
View this message in context: http://postgresql.1045698.n5.nabble.com/backup-strategies-for-large-databases-tp4697145p4702690.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.