| From: | David Ford <david(at)blue-labs(dot)org> |
|---|---|
| To: | pgsql-general(at)postgresql(dot)org |
| Subject: | Problem w/ dumping huge table and no disk space |
| Date: | 2001-09-07 20:52:34 |
| Message-ID: | 3B993392.1000809@blue-labs.org |
| Lists: | pgsql-general |
Help if you would please :)
I have a 10 million+ row table and only a couple hundred megabytes of
disk space left. I can't delete any rows: pg runs out of disk space and
crashes. I can't pg_dump with compression either: the output file is
started and contains the schema plus a bit of other info, about 650
bytes in all, then after 30 minutes pg runs out of disk space and
crashes. My pg_dump command is:
"pg_dump -d -f syslog.tar.gz -F c -t syslog -Z 9 syslog".
I want to dump this database (the entire pgsql directory is just over
two gigabytes) and put it on another, larger machine.
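One way a dump can reach the larger machine without a dump file ever being written to the full local disk is to stream pg_dump's output over the network. A minimal sketch, assuming the larger machine is reachable via ssh; the hostname `bighost` and the destination path are placeholders, not anything from this thread:

```shell
# Stream a compressed custom-format dump of the syslog table over ssh.
# With no -f option, pg_dump writes to stdout, so nothing lands on the
# local (nearly full) filesystem.
pg_dump -F c -Z 9 -t syslog syslog | ssh bighost 'cat > /backup/syslog.dump'
```

The `-Z 9` compression happens on the sending side, which also keeps the amount of data crossing the network small.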
I can't afford to lose this information. Are there any helpful hints?
I'll be happy to provide more information if desired.
David