From: Toru SHIMOGAKI <shimogaki(dot)toru(at)oss(dot)ntt(dot)co(dot)jp>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Dan Gorman <dgorman(at)hi5(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: PITR Backups
Date: 2007-06-22 02:30:49
Message-ID: 467B3459.9030808@oss.ntt.co.jp
Lists: pgsql-performance
Tom Lane wrote:
> Dan Gorman <dgorman(at)hi5(dot)com> writes:
>> All of our databases are on NetApp storage and I have been looking
>> at SnapMirror (PITR RO copy ) and FlexClone (near instant RW volume
>> replica) for backing up our databases. The problem is because there
>> is no write-suspend or even a 'hot backup mode' for postgres it's
>> very plausible that the database has data in RAM that hasn't been
>> written and will corrupt the data.
> Alternatively, you can use a PITR base backup as suggested here:
> http://www.postgresql.org/docs/8.2/static/continuous-archiving.html
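For reference, the procedure described in that chapter can be wrapped around a storage snapshot roughly as below. This is only a minimal sketch: psycopg2, the connection string, and the snap_create command are assumptions for illustration, not anything from this thread or a specific vendor tool.

```python
# Minimal sketch: take a storage snapshot between pg_start_backup()
# and pg_stop_backup(), per the 8.2 continuous-archiving docs.
# "snap_create" is a hypothetical placeholder for whatever snapshot
# command the storage layer provides (SnapMirror, LVM, etc.).
import subprocess
import psycopg2

SNAPSHOT_CMD = ["snap_create", "pgdata_volume"]   # hypothetical command

conn = psycopg2.connect("dbname=postgres user=postgres")
conn.autocommit = True
cur = conn.cursor()

# Put the server into backup mode; this forces a checkpoint and
# returns the starting WAL location.
cur.execute("SELECT pg_start_backup('storage_snapshot')")
print("backup started at WAL location:", cur.fetchone()[0])

try:
    # Take the storage-level snapshot while backup mode is active.
    subprocess.check_call(SNAPSHOT_CMD)
finally:
    # Always end backup mode so the required WAL can be archived.
    cur.execute("SELECT pg_stop_backup()")
    print("backup stopped at WAL location:", cur.fetchone()[0])

cur.close()
conn.close()
```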
I think Dan's problem is important when PostgreSQL is used for a large database:
- When we take a PITR base backup using a hardware-level snapshot operation (not a
filesystem-level one), which many storage vendors provide, the backup data can be
corrupted as Dan said. During recovery we may not even be able to read it,
especially if the metadata was corrupted (see the recovery sketch after this list).
- If we don't use a hardware-level snapshot operation, taking a large base backup
takes a long time, and a lot of full-page-write WAL files are generated.
So I think users need a new feature that avoids writing out heap pages while a
backup is being taken.
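For what it's worth, the documented recovery path for such a base backup relies on WAL replay to bring the data files back to a consistent state; a rough sketch of setting it up is below. The paths are assumptions, and it presumes the restored snapshot is readable at all, which is exactly the point in question here.

```python
# Minimal sketch of preparing archive recovery for an 8.2 server,
# assuming the snapshot has already been restored into DATA_DIR and
# the archived WAL lives in ARCHIVE_DIR (both paths are assumptions).
import os

DATA_DIR = "/var/lib/pgsql/data"          # restored base backup (assumption)
ARCHIVE_DIR = "/mnt/server/archivedir"    # WAL archive location (assumption)

# recovery.conf tells the server how to fetch archived WAL segments;
# replaying that WAL is what repairs pages the snapshot caught mid-write.
recovery_conf = "restore_command = 'cp {0}/%f %p'\n".format(ARCHIVE_DIR)

with open(os.path.join(DATA_DIR, "recovery.conf"), "w") as f:
    f.write(recovery_conf)

# After this, start the postmaster; it renames recovery.conf to
# recovery.done once recovery completes.
```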
Any comments?
Best regards,
--
Toru SHIMOGAKI<shimogaki(dot)toru(at)oss(dot)ntt(dot)co(dot)jp>
NTT Open Source Software Center