| From: | Mike Baker <bakerlmike(at)yahoo(dot)com> |
|---|---|
| To: | pgsql-admin(at)postgresql(dot)org |
| Subject: | Large Dump Files |
| Date: | 2002-07-18 17:36:02 |
| Message-ID: | 20020718173602.10572.qmail@web13808.mail.yahoo.com |
| Lists: | pgsql-admin |
Hi.
I am running PostgreSQL 7.1 on Red Hat Linux, kernel
build 2.4.2-2. I am in the process of updating
PostgreSQL to the latest version.
When I dump my database using all the compression
tricks in:
http://www.us.postgresql.org/users-lounge/docs/7.2/postgres/backup.html#BACKUP-DUMP-LARGE
my dump file is still over 2 GB and thus the dump
fails. We have a large amount of BLOB data in the
database.
I am wondering:
will cat filename* | psql dbname work if my dump file
has large binary objects in it?
If not, does anyone have experience getting Red Hat to
deal with large files? I can find no documentation on
large-file support for the kernel build I have.
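For context, the split-based recipe in the docs linked above looks like the sketch below (dbname and filename are placeholders, and the stand-in file at the end is only there to demonstrate the mechanism without a real database):

```shell
# Dump in chunks under the 2 GB file-size limit, then restore by
# concatenation, per the linked backup docs (placeholders: dbname, filename):
#
#   pg_dump dbname | split -b 1000m - filename
#   cat filename* | psql dbname
#
# The restore works because split preserves byte order and its suffixes
# (aa, ab, ...) sort in glob order, so cat reproduces the original stream
# exactly. Demonstrated here with a stand-in stream instead of a real dump:
seq 1 1000 > original.txt
split -b 512 original.txt piece_
cat piece_* > restored.txt
cmp -s original.txt restored.txt && echo "round-trip OK"
```

Whether psql can replay such a dump depends on how the binary objects were dumped in the first place, which is the open question here; the split/cat step itself is byte-transparent.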
Thanks.
Mike Baker