From: brianb-pggeneral(at)edsamail(dot)com
To: pgsql-general(at)postgresql(dot)org
Cc: paul(at)edsamail(dot)com
Subject: Backup/dump of huge tables and performance
Date: 2000-07-28 02:26:36
Message-ID: 20000728022636.22916.qmail@mail01.edsamail.com.ph
Lists: pgsql-general
Hello,
I have a large table whose datafiles are already several GB in size. It is
no longer being used in transactions. I would like to back it up and migrate
it to another server, but there are several considerations:
1. pg_dump or COPY takes up a lot of I/O resources. That's not surprising
considering the size of the table, but the impact on the overall production
system's performance is not acceptable.
2. I don't know how to estimate the size of the resulting dump files from
the database files. Even running a SELECT COUNT(*) on the table slows the
entire system down considerably, so I don't know exactly how many rows there
are (last I checked it was a couple million, and by now it's definitely in
the tens of millions). What happens when I hit Linux's 2GB filesize limit?
I would very much prefer to have the backup files in little 10-20MB chunks,
rather than one humongous dumpfile. I also want to be able to run the
backup without shutting down the service that uses the database.
As noted above, this particular table is no longer being used in
transactions. I figure I could write a C program to declare a cursor and
write it out in n-MB or m-thousand row chunks, with rest pauses in between.
Any better ideas? Would this be a good feature to incorporate into future
versions of pg_dump?
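
For concreteness, here is a rough sketch of the kind of program I have in
mind (assuming libpq; the database name, table name, 10000-row batch size,
and 5-second pause are just placeholders, and it doesn't handle NULLs or
escaping the way COPY format does):

/*
 * Rough sketch: export a big table into chunk files through a cursor,
 * pausing between batches to limit the I/O impact.
 * "mydb", "big_table", the batch size and the pause are placeholders.
 */
#include <stdio.h>
#include <unistd.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=mydb");
    PGresult *res;
    int       chunk = 0;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* cursors only live inside a transaction */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE bigcur CURSOR FOR SELECT * FROM big_table"));

    for (;;)
    {
        char  fname[64];
        FILE *out;
        int   rows, ncols, r, c;

        /* pull the next batch of rows through the cursor */
        res = PQexec(conn, "FETCH 10000 FROM bigcur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        rows = PQntuples(res);
        if (rows == 0)          /* cursor exhausted, we're done */
        {
            PQclear(res);
            break;
        }

        /* write this batch to its own tab-delimited chunk file
         * (NULLs come back as empty strings -- good enough for a sketch) */
        snprintf(fname, sizeof(fname), "big_table.%05d.dump", chunk++);
        out = fopen(fname, "w");
        if (out == NULL)
        {
            perror(fname);
            PQclear(res);
            break;
        }
        ncols = PQnfields(res);
        for (r = 0; r < rows; r++)
            for (c = 0; c < ncols; c++)
                fprintf(out, "%s%c", PQgetvalue(res, r, c),
                        c < ncols - 1 ? '\t' : '\n');
        fclose(out);
        PQclear(res);

        sleep(5);               /* rest pause between batches */
    }

    PQclear(PQexec(conn, "CLOSE bigcur"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}

Each batch would land in its own file, so the chunks could be compressed or
copied to the new server one at a time while the service keeps running.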
Brian
--
Brian Baquiran <brianb(at)edsamail(dot)com>
http://www.baquiran.com/ AIM: bbaquiran
Work: +63(2)7182222 Home: +63(2) 9227123