From: Tom Lane <tgl@sss.pgh.pa.us>
To: Ari Halberstadt <ari@shore.net>
Cc: pgsql-hackers@postgreSQL.org
Subject: Re: [HACKERS] pg_dump core dump, upgrading from 6.5b1 to 5/24 snapshot
Date: 1999-05-26 18:45:54
Message-ID: 13778.927744354@sss.pgh.pa.us
Lists: pgsql-hackers

Ari Halberstadt <ari@shore.net> writes:
> Tom Lane <tgl@sss.pgh.pa.us> noted that MAXQUERYLEN's value in pg_dump is
> 5000. Some of my fields are the maximum length for a text field.

There are two bugs here: dumpClasses_dumpData() should not be making any
assumption at all about the maximum size of a tuple field, and pg_dump's
value for MAXQUERYLEN ought to match the backend's. I hadn't realized
that it wasn't using the same query buffer size as the backend does ---
this might explain some other complaints we've seen about being
unable to dump complex table or rule definitions.
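
For what it's worth, the field-size half of the fix could look something
like this (a fragment only, with illustrative names rather than the actual
pg_dump code; "res" is the PGresult and "tuple" the current row index):

    /* size the INSERT buffer from the actual field lengths rather
     * than assuming MAXQUERYLEN */
    size_t  needed = 64;        /* room for "INSERT INTO ... VALUES" */
    char   *query;
    int     field;

    for (field = 0; field < PQnfields(res); field++)
        needed += 2 * PQgetlength(res, tuple, field) + 16;  /* worst case:
                                                             * every char quoted */
    query = (char *) malloc(needed);
    if (query == NULL)
    {
        fprintf(stderr, "pg_dump: out of memory\n");
        exit(1);
    }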

Will fix both problems this evening.

> The dumped data file is 15MB (no -d or -D option) or 22MB (with -D). The
> core file is 13.8MB, which sounds like a memory leak in pg_dump.

Not necessarily --- are the large text fields in a multi-megabyte table?
When you're using -d or -D, pg_dump just does a "SELECT * FROM table" and
iterates through the returned result, which must hold the whole table.
(This is another reason why I prefer not to use -d/-D ... the COPY
method doesn't require buffering the whole table inside pg_dump.)
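
To illustrate the difference, here is roughly what the COPY path amounts
to in libpq terms (a simplified sketch, not the actual pg_dump code, and
it assumes data lines fit in an 8K buffer):

    #include <stdio.h>
    #include <string.h>
    #include "libpq-fe.h"

    /* stream one table via COPY: memory use is bounded by the line
     * buffer, not by the table size */
    static void
    dump_table_copy(PGconn *conn, FILE *out, const char *table)
    {
        char        query[256];
        char        line[8192];
        PGresult   *res;

        snprintf(query, sizeof(query), "COPY %s TO stdout", table);
        res = PQexec(conn, query);
        if (PQresultStatus(res) != PGRES_COPY_OUT)
        {
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
            PQclear(res);
            return;
        }
        PQclear(res);

        /* PQgetline hands back one data row at a time */
        while (PQgetline(conn, line, sizeof(line)) != EOF)
        {
            if (strcmp(line, "\\.") == 0)       /* end-of-copy marker */
                break;
            fprintf(out, "%s\n", line);
        }
        PQendcopy(conn);
    }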

Some day we should enhance libpq to allow a select result to be received
and processed in chunks smaller than the whole result.
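
Until then, a client can approximate chunked retrieval with a cursor, so
that only one batch of rows is in memory at a time; something like this
(cursor and table names are made up for the example):

    PGresult   *res;
    int         row;

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE dump_cur CURSOR FOR SELECT * FROM bigtable"));

    for (;;)
    {
        res = PQexec(conn, "FETCH 100 FROM dump_cur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
        {
            PQclear(res);
            break;
        }
        for (row = 0; row < PQntuples(res); row++)
            printf("%s\n", PQgetvalue(res, row, 0));    /* process a row */
        PQclear(res);
    }

    PQclear(PQexec(conn, "CLOSE dump_cur"));
    PQclear(PQexec(conn, "COMMIT"));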

regards, tom lane
