Re: pg_dump large-file support > 16GB

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Rafael Martinez Guerrero <r(dot)m(dot)guerrero(at)usit(dot)uio(dot)no>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: pg_dump large-file support > 16GB
Date: 2005-03-17 15:17:17
Message-ID: 24124.1111072637@sss.pgh.pa.us
Lists: pgsql-general

Rafael Martinez Guerrero <r(dot)m(dot)guerrero(at)usit(dot)uio(dot)no> writes:
> We are trying to dump a 30GB+ database using pg_dump with the --file
> option. In the beginning everything works fine: pg_dump runs and we get
> a dump file. But when this file reaches 16GB it disappears from the
> filesystem, and pg_dump continues working without reporting an error until
> it finishes (even though the file no longer exists). The filesystem has
> free space.

Is that a plain text, tar, or custom dump (-Ft or -Fc)? Is the behavior
different if you just write to stdout instead of using --file?

regards, tom lane
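
For readers following the thread later, a minimal sketch of the invocations being compared, using the standard pg_dump format flags; the database name and output paths below are placeholders, not taken from the original report:

    pg_dump -Fc mydb --file=mydb.dump    # custom-format archive (-Fc)
    pg_dump -Ft mydb --file=mydb.tar     # tar-format archive (-Ft)
    pg_dump mydb --file=mydb.sql         # plain-text dump via --file
    pg_dump mydb > mydb.sql              # plain-text dump via stdout redirect

Comparing the --file variants against the stdout redirect helps isolate whether the 16GB cutoff comes from pg_dump's own file handling for a given format or from something outside pg_dump entirely.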
