From: Jeff <threshar(at)torgo(dot)978(dot)org>
To: "Dan Langille" <dan(at)langille(dot)org>
Cc: Jonathan Gardner <jgardner(at)jonathangardner(dot)net>, pgsql-advocacy(at)postgresql(dot)org
Subject: Re: PostgreSQL vs MySQL
Date: 2004-05-24 18:32:26
Message-ID: B46E4954-ADB0-11D8-B225-000D9366F0C4@torgo.978.org
Lists: pgsql-advocacy
On May 21, 2004, at 3:36 PM, Dan Langille wrote:
> On 21 May 2004 at 11:10, Jonathan Gardner wrote:
>
>> But local backups -- that's just weird. I've seen backups being made
>> locally, but then moved off the server on to some other data storage
>> device (hard disk, tape drive, CD ROM) on another server.
>
> Yes, that is what I'm talking about.
>
Someone could probably write, without much effort, a script that is fed
the output of pg_dump and is smart enough to "chunk" things out (i.e.,
hit 30 GB, write a tape, rinse, repeat).
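
A minimal sketch of such a chunker, assuming you pipe pg_dump straight
into it; the 30 GB volume size and the dump.partNNN file naming are made
up for illustration, and a real version would write to the tape device
instead of files:

import sys

CHUNK_BYTES = 30 * 1024 ** 3      # "hit 30 gigs" per tape/volume
BUF_BYTES = 1 << 20               # read 1 MB at a time

volume = 0
out = None
written = 0
while True:
    buf = sys.stdin.buffer.read(BUF_BYTES)
    if not buf:
        break
    while buf:
        if out is None:                       # start a new volume
            volume += 1
            out = open("dump.part%03d" % volume, "wb")
            written = 0
        room = CHUNK_BYTES - written
        piece, buf = buf[:room], buf[room:]
        out.write(piece)
        written += len(piece)
        if written >= CHUNK_BYTES:            # volume full: write the
            out.close()                       # tape, rinse, repeat
            out = None
if out is not None:
    out.close()

Run it as: pg_dump mydb | python chunker.py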
One weak area, though, is fast recovery. Reloading a multi-GB db from a
pg_dump is painful, especially if you have foreign keys, since every
index has to be rebuilt and every constraint re-checked. Lots of
sort_mem helps.
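
If you're restoring with psql, one way to bump it for just that session
is via PGOPTIONS. A sketch (the 64 MB value, database name, and file
name are made up, and this assumes a 7.x-era server where sort_mem is
the knob):

import os, subprocess

# pass "-c sort_mem=65536" (64 MB, value is in KB) to the backend
# for this restore session only
env = dict(os.environ, PGOPTIONS="-c sort_mem=65536")
subprocess.run(["psql", "-d", "mydb", "-f", "dump.sql"],
               env=env, check=True)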
My plan for our informix->pg migration is to take advantage of LVM's
snapshot feature: make a snapshot, then back up the raw data. That way
the time to recover is simply however long it takes to load the backed-
up data onto the server. No waiting for indexes & FKs to be rebuilt. It
will use more space on the backup media, but that is the price you pay.
To PG the restored cluster just looks like it came back from a power
failure or some other crash.
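
Something along these lines, sketched around the standard LVM tools; the
volume group and LV names (vg0, pgdata), snapshot size, and paths are
all made up for illustration:

import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)   # stop if any step fails

# 1. Snapshot the volume holding $PGDATA while postgres keeps running.
run("lvcreate", "--snapshot", "--size", "2G",
    "--name", "pgsnap", "/dev/vg0/pgdata")
try:
    # 2. Mount the snapshot read-only and copy off the raw cluster files.
    run("mount", "-o", "ro", "/dev/vg0/pgsnap", "/mnt/pgsnap")
    try:
        run("tar", "-czf", "/backup/pgdata.tar.gz",
            "-C", "/mnt/pgsnap", ".")
    finally:
        run("umount", "/mnt/pgsnap")
finally:
    # 3. Drop the snapshot so it stops eating copy-on-write space.
    run("lvremove", "-f", "/dev/vg0/pgsnap")

Recovery is then just untarring onto the new volume and starting
postgres, which runs its normal crash recovery.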
--
Jeff Trout <jeff(at)jefftrout(dot)com>
http://www.jefftrout.com/
http://www.stuarthamm.net/