From: reina(at)nsi(dot)edu (Tony Reina)
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: Large Dump Files
Date: 2002-07-19 18:01:15
Message-ID: f40d3195.0207191001.2b8f21a0@posting.google.com
Lists: pgsql-admin
Mike,
Are you sure that 'split' won't work? It is designed specifically
to break large files into smaller chunks:
pg_dump dbname | split -b 1000m - filename
Reload with
createdb dbname
cat filename* | psql dbname
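
If the pieces are still unwieldy, split can also be combined with
compression on the way out and back in. A rough sketch along the lines
of the backup docs (assuming gzip is available; 'dbname' and 'filename'
are placeholders):

pg_dump dbname | gzip | split -b 1000m - filename.gz.

and to restore:

createdb dbname
cat filename.gz.* | gunzip | psql dbname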
-Tony
bakerlmike(at)yahoo(dot)com (Mike Baker) wrote in message news:<20020718173602(dot)10572(dot)qmail(at)web13808(dot)mail(dot)yahoo(dot)com>...
> Hi.
>
> I am running postgresql 7.1 on Red Hat Linux, kernel
> build 2.4.2-2. I am in the process of updating
> postgresql to the latest version.
>
> When I dump my database using all the compression
> tricks in:
> http://www.us.postgresql.org/users-lounge/docs/7.2/postgres/backup.html#BACKUP-DUMP-LARGE
>
> my dump file is still over 2 GB, and thus the dump
> fails. We have a large amount of BLOB data in the
> database.
>
> I am wondering:
>
> Will cat filename* | psql dbname work if my dump file
> has large binary objects in it?
>
> If not, does anyone have experience getting Red Hat to
> deal with large files? I can find no documentation
> that deals with large files for the kernel build I have.
>
> Thanks.
>
> Mike Baker
>