From: "Andrus" <eetasoft(at)online(dot)ee>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: How to implement backup protocol
Date: 2006-11-28 16:01:43
Message-ID: ekhmhs$1g6o$1@news.hub.org
Lists: pgsql-general
> The weekly backup of the larger of the two databases produces a file that
> is about 20GB and takes about an hour and 15 minutes. I then compress it
> down to about 4 GB, which takes another hour. However, because that's a
> separate task, it doesn't impact the database server as much. (I suspect
> all that I/O slows things down a bit, but I haven't noticed any
> significant effect in my transaction time reports. That task is run during
> the slowest 4 hour period of the week, though).
My environment is a bit different. For safety, I need to create backups on a
separate computer over the internet.
1. The backup computer has a consumer-grade internet connection (ADSL,
600 kbit/s download speed).
2. The query "SELECT sum(relpages * 8/1000) FROM pg_class" returns 1302 for
a database restored from backup.
So my data size seems to be only about 1.3 GB.
3. The backup file size is 70 MB.
4. Backup client has all ports closed.
5. The server has *only* port 5432 open.
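The size estimate in point 2 follows from PostgreSQL's fixed 8 kB page size: SUM(relpages * 8/1000) approximates the database size in megabytes. A quick check of the arithmetic (1302 is the query result quoted above):

```shell
# Each PostgreSQL page is 8 kB, so relpages * 8/1000 is megabytes;
# divide by 1000 again to get an approximate size in gigabytes.
awk 'BEGIN { printf "approx. database size: %.1f GB\n", 1302 / 1000 }'
```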
So I think the 4.5 hours required to create a backup comes from pg_dump
downloading the whole database (about 1.3 GB) in uncompressed form over the
slow internet connection.
The compression level has almost no effect on this.
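This matches how pg_dump works when run from the client: compression (the -Z option of the custom format) is applied only after the table data has already crossed the wire. A minimal sketch, with placeholder host, user, and database names:

```shell
# Runs on the backup client. All ~1.3 GB of table data travels
# uncompressed over the 600 kbit/s link; -Z 9 only compresses the
# dump locally as it is written, which is why changing the
# compression level barely affects the total backup time.
pg_dump -h db.example.com -p 5432 -U backup_user -Fc -Z 9 mydb > mydb.dump
```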
I think I can create the backup copy quickly on the server computer, but how
do I send it to the backup computer?
pg_read_file() can read only text files and is restricted to superusers.
How could a function pg_read_backup() be added to Postgres that creates a
backup file and returns it at full download speed?
This probably requires implementing some file download protocol.
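One workaround sometimes suggested that stays within the single open port 5432 is to create the dump on the server and ship it through the database connection itself as a large object. A rough sketch, assuming superuser access on the server for lo_import() and using a made-up OID:

```shell
# Step 1, on the server: dump locally (fast, no slow network involved).
pg_dump -Fc mydb > /tmp/mydb.dump

# Step 2, on the server: load the dump into a large object.
# The server-side lo_import() function reads a file on the server's
# filesystem and returns the new large object's OID (superuser only).
psql -d mydb -c "SELECT lo_import('/tmp/mydb.dump');"

# Step 3, on the backup client: fetch the (already compressed) dump
# over port 5432 and write it to a client-side file. 16385 is a
# placeholder for the OID returned in step 2.
psql -h db.example.com -d mydb -c "\lo_export 16385 /backup/mydb.dump"

# Step 4, on the server: remove the large object afterwards.
psql -d mydb -c "SELECT lo_unlink(16385);"
```

Because the dump file is already compressed by pg_dump on the server, only the ~70 MB dump crosses the slow link instead of ~1.3 GB of raw table data.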
> BTW, if you've never actually tested your recovery capabilities, can you
> be sure they work?
> I did a full-blown test in February or March and found a few loose ends.
> And when we had to do the real thing in May (due to a power supply
> failure), there were STILL a few loose ends, but we were back online
> within 12 hours of when I started the recovery process, and half of that
> time was spent completing the setup of the 'backup' server, which I had
> been rebuilding. I'm working to lower that downtime and will be doing
> another full-blown test in January or February.
I would expect that a full database backup created using pg_dump never has
any problems on restore.
Andrus.