Re: Out of memory on pg_dump

From: "Chris Hopkins" <chopkins(at)cra(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: Out of memory on pg_dump
Date: 2009-08-21 15:29:48
Message-ID: 2F740099AD5F8E4BA876BC6580B16D480182864E@server2.cra.lan
Lists: pgsql-general

Thanks Tom. Next question (and sorry if this is an ignorant one)... how
would I go about doing that?

- Chris


-----Original Message-----

From: Tom Lane [mailto:tgl(at)sss(dot)pgh(dot)pa(dot)us]
Sent: Friday, August 21, 2009 11:07 AM
To: Chris Hopkins
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: [GENERAL] Out of memory on pg_dump

"Chris Hopkins" <chopkins(at)cra(dot)com> writes:
> 2009-08-19 22:35:42 ERROR: out of memory
> 2009-08-19 22:35:42 DETAIL: Failed on request of size 536870912.

> Is there an easy way to give pg_dump more memory?

That isn't pg_dump that's out of memory --- it's a backend-side message.
Unless you've got extremely wide fields in this table, I would bet on
this really being a corrupted-data situation --- that is, there's some
datum in the table whose length word has been corrupted into a very
large value. You can try to isolate and delete the corrupted row(s).

regards, tom lane
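For anyone following along, one common way to isolate rows like this is to read the table one row at a time by ctid and note which reads fail. The sketch below assumes that approach; the function name (find_bad_rows) and table name (broken_table) are hypothetical placeholders, and you should take a filesystem-level copy of the data directory before deleting anything.

-- Hypothetical helper: returns the ctid of every row that cannot be read.
CREATE OR REPLACE FUNCTION find_bad_rows(tablename text) RETURNS SETOF tid AS $$
DECLARE
    t tid;
    dummy text;
BEGIN
    FOR t IN EXECUTE 'SELECT ctid FROM ' || tablename LOOP
        BEGIN
            -- Casting the whole row to text forces every column (including
            -- TOASTed ones) to be read, so a corrupted length word raises an
            -- error here for just this one row instead of aborting the dump.
            EXECUTE 'SELECT x::text FROM ' || tablename
                 || ' AS x WHERE ctid = ' || quote_literal(t::text) || '::tid'
                INTO dummy;
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'unreadable row at ctid %: %', t, SQLERRM;
            RETURN NEXT t;
        END;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- Usage (after backing up the data directory):
--   SELECT * FROM find_bad_rows('broken_table');
--   DELETE FROM broken_table WHERE ctid = '(1234,5)';  -- one reported ctid at a time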
