From: Philip Warner <pjw(at)rhyme(dot)com(dot)au>
To: <nickf(at)ontko(dot)com>, <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: pg_dump failure in tar format.
Date: 2003-08-01 23:37:06
Message-ID: 5.1.0.14.0.20030802093535.070b0008@mail.rhyme.com.au
Lists: pgsql-bugs
At 02:47 PM 1/08/2003 -0500, Nick Fankhauser - Doxpop wrote:
>pg_dump: [tar archiver] could not write to tar member (wrote 39, attempted
>166)
One of the nasty features of the TAR format is that it needs to know each
file's size before adding it to the archive. As a result, pg_dump writes
each table's data to a file in the /tmp directory before copying it into
the actual output file. For huge tables, this means /tmp must be able to
hold the uncompressed size of the largest table. It's horrible, I know,
which is why I use -Fc, but I'd guess this is the cause of your error.
It uses tmpfile() to get a temp file, so I can't see a simple way to test
this, unless you can free up 2+GB in /tmp?
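For what it's worth, the staging pattern looks roughly like this (a
simplified sketch, not the actual pg_dump code; the one-line "header" and
the file names are made up, standing in for a real tar header):

    /* Simplified sketch only: a tar header must record the member's size
     * before any of its data, so the data is staged in a tmpfile() first
     * and copied into the archive afterwards. */
    #include <stdio.h>
    #include <stdlib.h>

    static void add_tar_member(FILE *archive, const char *name, FILE *data)
    {
        char   buf[8192];
        size_t n;

        /* Stage the member's data in a temp file so its size is known. */
        FILE *tmp = tmpfile();                 /* usually lives in /tmp */
        if (tmp == NULL) { perror("tmpfile"); exit(1); }

        while ((n = fread(buf, 1, sizeof(buf), data)) > 0)
            fwrite(buf, 1, n, tmp);
        long size = ftell(tmp);

        /* Only now can the header (size included) be written. */
        fprintf(archive, "%s %ld\n", name, size);

        /* Copy the staged data into the archive proper. */
        rewind(tmp);
        while ((n = fread(buf, 1, sizeof(buf), tmp)) > 0)
        {
            if (fwrite(buf, 1, n, archive) != n)
            {
                /* A short write here is the kind of failure the "could not
                 * write to tar member (wrote X, attempted Y)" message reports. */
                fprintf(stderr, "short write on tar member %s\n", name);
                exit(1);
            }
        }
        fclose(tmp);                           /* temp file is removed automatically */
    }

    int main(void)
    {
        FILE *data = tmpfile();                /* stand-in for one table's dump */
        if (data == NULL) { perror("tmpfile"); return 1; }
        fputs("some table data\n", data);
        rewind(data);
        add_tar_member(stdout, "table_0001.dat", data);
        fclose(data);
        return 0;
    }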
Please let me know if this is the cause. If you cannot test it, I will
try to send a patch that (temporarily) avoids using tmpfile(). Ideally, I
suppose pg_dump should let you override the tmpfile() location.
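What I have in mind is something along these lines (again just a sketch,
not a real patch; the environment variable name is made up purely for
illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Sketch of letting an environment variable redirect the temp file;
     * PG_DUMP_TMPDIR is a made-up name, not an existing option. */
    static FILE *open_temp_file(void)
    {
        const char *dir = getenv("PG_DUMP_TMPDIR");
        if (dir == NULL || *dir == '\0')
            return tmpfile();          /* current behaviour: system default, usually /tmp */

        char path[4096];
        snprintf(path, sizeof(path), "%s/pg_dump_XXXXXX", dir);

        int fd = mkstemp(path);        /* create the temp file where told to */
        if (fd < 0)
            return NULL;
        unlink(path);                  /* keep tmpfile()'s delete-on-close behaviour */
        return fdopen(fd, "w+b");
    }

    int main(void)
    {
        FILE *tmp = open_temp_file();
        if (tmp == NULL) { perror("open_temp_file"); return 1; }
        fputs("staged data\n", tmp);
        fclose(tmp);
        return 0;
    }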
Bye for now,
Philip
----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.B.N. 75 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 03 5330 3172 | ___________ |
Http://www.rhyme.com.au | / \|
| --________--
PGP key available upon request, | /
and from pgp5.ai.mit.edu:11371 |/