From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Jaime Soler <jaime(dot)soler(at)gmail(dot)com>, Amit Khandekar <amitdkhan(dot)pg(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Hash join in SELECT target list expression keeps consuming memory
Date: 2018-03-21 15:51:14
Message-ID: 18836.1521647474@sss.pgh.pa.us
Lists: pgsql-hackers
Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> writes:
> On 03/21/2018 02:18 PM, Jaime Soler wrote:
>> We still get out of memory error during pg_dump execution
>> pg_dump: reading large objects
>> out of memory
> Hmmmm ... that likely happens because of this for loop copying a lot of
> data:
> https://github.com/postgres/postgres/blob/master/src/bin/pg_dump/pg_dump.c#L3258
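(For illustration only: the loop in question is C code in pg_dump's getBlobs(), but its memory behavior can be sketched like this. pg_dump fetches the whole pg_largeobject_metadata result set at once, then allocates a metadata entry per large object and copies several catalog fields into it, so memory grows linearly with the *number* of large objects, independent of their total size. The class and field names below are stand-ins, not pg_dump's actual identifiers.)

```python
# Hedged sketch of pg_dump's per-large-object metadata pattern.
# In the real C code, each row gets a BlobInfo struct and several
# pg_strdup()'d strings; here a small Python object plays that role.

class BlobEntry:  # stand-in for pg_dump's per-object metadata struct
    def __init__(self, oid, owner, acl):
        self.oid = oid
        self.owner = owner  # copied per row, like pg_strdup()
        self.acl = acl

def collect_blob_metadata(rows):
    # 'rows' mimics the full query result, which pg_dump also holds
    # in memory all at once; every row becomes a resident entry.
    return [BlobEntry(*row) for row in rows]

# 100,000 large objects -> 100,000 entries resident simultaneously,
# before a single byte of blob *data* is dumped.
blobs = collect_blob_metadata((i, "owner", "acl") for i in range(100_000))
print(len(blobs))
```

With millions of large objects, this per-object bookkeeping alone can exhaust memory on a constrained machine, which matches the "out of memory" failure at the "reading large objects" stage.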
The long and the short of it is that too many large objects *will*
choke pg_dump; this has been obvious since we decided to let it treat
large objects as heavyweight objects. See eg
https://www.postgresql.org/message-id/29613.1476969807@sss.pgh.pa.us
I don't think there's any simple fix available. We discussed some
possible solutions in
https://www.postgresql.org/message-id/flat/5539483B.3040401%40commandprompt.com
but none of them looked easy. The best short-term answer is "run
pg_dump in a less memory-constrained system".
regards, tom lane