From: Steve Crawford <scrawford(at)pinpointresearch(dot)com>
To: "Decibel!" <decibel(at)decibel(dot)org>
Cc: "Joey K(dot)" <pguser(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Seeking datacenter PITR backup suggestions
Date: 2007-08-28 18:05:52
Message-ID: 46D46400.70402@pinpointresearch.com
Lists: pgsql-general
> In general, your handling of WAL files seems fragile and error-prone....
Indeed. I would recommend simply using rsync to handle pushing the
files. I see several advantages:
1. Distributed load - you aren't copying a full day's worth of files all
at once.
2. Very easy to set up - you can use it directly as your archive_command
if you wish (see the sketch after this list).
3. Atomic. Rsync copies new data to a temporary file that is only moved
into place when the transfer is complete, so the destination server will
never see a partial file. Depending on the FTP client/server combo, you
will likely end up with a partial file in the event of a communication
failure.
4. Much more up-to-the-minute recovery data.
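Something along these lines, for example (hostname and paths are just
placeholders, adjust for your setup):

  # postgresql.conf - push each WAL segment as PostgreSQL finishes it
  archive_command = 'rsync -a %p standby.example.com:/var/lib/pgsql/wal_archive/%f'

PostgreSQL substitutes %p (path of the segment) and %f (file name), and
rsync's non-zero exit status on failure is exactly what archive_command
needs in order to know the segment must be retried.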
In your scenario, what about using "cp -l" (or "ln") instead? Since a
hard link only creates a new pointer to the existing data, it is very
fast and saves a bunch of disk IO on your server, and it doesn't appear
that the tempdir is used for much other than organizational purposes
anyway.
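I.e., something like this (path made up, and note that the hard link
only works if the staging directory is on the same filesystem as pg_xlog):

  # postgresql.conf - link the segment into the outgoing dir instead of copying it
  archive_command = 'cp -l %p /var/lib/pgsql/wal_outgoing/%f'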
I'm setting up some test machines to learn more about PITR and warm
backups and am considering a two-stage process using "cp -l" to add the
file to the list needing transfer and regular rsync to actually move the
files to the destination machine. (The destination machine will be over
a WAN link so I'd like to avoid having PG tied up waiting for each rsync
to complete.)
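Roughly (untested, host and paths made up): the same "cp -l"
archive_command as above to stage segments locally, plus a periodic
rsync from cron to push them over the WAN and remove them once sent:

  # cron, every few minutes - ship staged segments, delete them after transfer
  rsync -a --remove-source-files /var/lib/pgsql/wal_outgoing/ \
      standby.example.com:/var/lib/pgsql/wal_archive/

(--remove-source-files is called --remove-sent-files on older rsync
versions.) That way PostgreSQL only ever waits on a local hard link,
never on the WAN.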
Cheers,
Steve