Re: Bad pg_dump error message

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Mike Christensen <mike(at)kitchenpc(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Bad pg_dump error message
Date: 2012-09-11 02:42:21
Message-ID: CAMkU=1xd2PLR8KkRnmGT45=68rDdjvU2cK6gvMmK_2ChdOMJEw@mail.gmail.com
Lists: pgsql-general

On Mon, Sep 10, 2012 at 5:27 PM, Mike Christensen <mike(at)kitchenpc(dot)com> wrote:
> Is there something that can be done smarter with this error message?
>
>
> pg_dump: dumping contents of table pages
> pg_dump: [tar archiver] archive member too large for tar format
> pg_dump: *** aborted because of error

Maybe it could tell you what the maximum allowed size is, for future
reference.

>
> If there's any hard limits (like memory, or RAM) that can be checked
> before it spends two hours downloading the data,

There is no efficient way for it to know for certain how much space
the data will take until it has actually seen the data. Perhaps it
could make an estimate, but that could suffer from both false
positives and false negatives.
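
For what it's worth, you can do a rough pre-flight check yourself from
psql before starting the dump. This is only a sketch: "mydb" is a
placeholder, and on-disk size is a loose proxy for the size of the
dumped text (TOAST compression can cut either way), so it is exactly
the kind of estimate described above, false positives and negatives
included.

    # list ordinary tables whose on-disk size already exceeds the
    # 8 GB tar member limit
    psql -d mydb -c "
      SELECT relname,
             pg_size_pretty(pg_table_size(oid)) AS on_disk
        FROM pg_class
       WHERE relkind = 'r'
         AND pg_table_size(oid) > 8589934592
       ORDER BY pg_table_size(oid) DESC;"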

The docs for pg_dump do mention an 8 GB limit for individual tables
in the tar format. I don't see what more than that warning can
reasonably be done. It looks like pg_dump writes each table to a temp
file first, so I suppose it could throw the error at the point the
temp file exceeds that size, rather than waiting for the table to be
completely dumped and then rejected when it is added to the archive.
But that would break modularity somewhat, and even then you could
have dumped 300 tables of 7.5 GB each before reaching the 8.5 GB one
that causes the error.
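
In the meantime, the practical workaround is to use an output format
other than tar, which sidesteps the per-member limit entirely. A
sketch, again with "mydb" as a placeholder:

    # the custom format has no 8 GB per-table ceiling; the directory
    # format (-Fd, in 9.1 and later) avoids it as well
    pg_dump -Fc -f mydb.dump mydb

    # restore with pg_restore rather than feeding the file to psql
    pg_restore -d mydb mydb.dump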

Cheers,

Jeff
