Re: pg_dump out of memory for large table with LOB

From: Ron <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: pg_dump out of memory for large table with LOB
Date: 2018-11-10 22:46:20
Message-ID: 4109ea0c-fedb-71a3-46c2-2325083c58d5@gmail.com
Lists: pgsql-general

On 11/09/2018 05:49 PM, Jean-Marc Lessard wrote:
>
> I am running PostgreSQL 9.6.5 on x86_64-pc-mingw64, compiled by gcc.exe
> (Rev5, Built by MSYS2 project) 4.9.2, 64-bit
>
> on Windows Server 2012 with 12 GB RAM
>
> The dumped table is 0.5TB, 17 million rows and LOB uses about 99% of the
> space.
>
> pg_dump consumes the entire system memory and swap, then terminates
> with an out-of-memory error.
>
> Is it a bug or normal behavior?
>
> If I do not include LOB in the dump, it works fine.
>
> Here is the dump output:
>
> C:\Users\Administrator> pg_dump -h localhost -Fc -a -b -t signatures -v >
> d:\postgresql\sig.dmp
>
> pg_dump: last built-in OID is 16383
>
> ...
>
> pg_dump: reading row security enabled for table "ibisl1.signatures"
>
> pg_dump: reading policies for table "ibisl1.signatures"
>
> pg_dump: reading large objects
>
> pg_dump: reading dependency data
>
> pg_dump: saving encoding = UTF8
>
> pg_dump: saving standard_conforming_strings = on
>
> out of memory
>

This looks similar to the recent thread "Trouble Upgrading Postgres".
https://www.postgresql.org/message-id/flat/CAFw6%3DU2oz9rTF0qa0LFMg91bu%3Dhdisfu2-xXU1%3D%3DD7yBif%2B2uw%40mail.gmail.com

Specifically, message ce239c9c-68f2-43e6-a6b6-81c66d0f46e5(at)manitou-mail(dot)org:
https://www.postgresql.org/message-id/ce239c9c-68f2-43e6-a6b6-81c66d0f46e5%40manitou-mail.org

"The hex expansion performed by COPY must allocate twice that size,
plus the rest of the row, and if that resulting size is above 1GB, it
will error out with the message you mentioned upthread:
ERROR: invalid memory alloc request size <some value over 1 billion>.
So there's no way it can deal with the contents over 500MB, and the
ones just under that limit may also be problematic."
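To illustrate the arithmetic behind that quote: COPY's hex format for bytea emits exactly two output characters per input byte, and PostgreSQL refuses any single allocation above MaxAllocSize (1 GB minus 1 byte). A minimal sketch of the size math (my own illustration, not PostgreSQL source code):

```python
# Why COPY fails on large bytea values: hex encoding doubles the size,
# and PostgreSQL caps any single allocation at MaxAllocSize (~1 GiB).
MAX_ALLOC_SIZE = (1 << 30) - 1  # PostgreSQL's MaxAllocSize: 1 GiB - 1 byte

def hex_encoded_size(raw_len: int) -> int:
    """Length of the hex text COPY would produce for a bytea of raw_len
    bytes (ignoring the '\\x' prefix and the rest of the row)."""
    return 2 * raw_len

# Sanity check: Python's hex() shows the same 2-to-1 expansion.
assert len(b"\x00\xff".hex()) == 2 * len(b"\x00\xff")

for mib in (400, 500, 512, 600):
    raw = mib * 1024 * 1024
    status = "FAILS" if hex_encoded_size(raw) > MAX_ALLOC_SIZE else "fits"
    print(f"{mib} MiB value -> {hex_encoded_size(raw):,} hex bytes: {status}")
```

So a value of 512 MiB already doubles to exactly 1 GiB and cannot be encoded, which matches the "contents over 500MB" figure above; values just under that may still fail once the rest of the row is added.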

And message 89b5b622-4c79-4c95-9ad4-b16d0d0daf9b(at)manitou-mail(dot)org

"It's undoubtedly very annoying that a database can end up with
non-pg_dump'able contents, but it's not an easy problem to solve. Some time
ago, work was done to extend the 1GB limit but eventually it got scratched.
The thread in [1] discusses many details of the problem and why the proposed
solutions were mostly a band-aid. Basically, the specs of COPY and other
internal aspects of Postgres are from the 32-bit era, when putting the size
of an entire CDROM in a single row/column was not anticipated as a valid use
case. It's still a narrow use case today, and applications that need to store
big pieces of data like that should slice them into chunks, a bit like
pg_largeobject does, except in much larger chunks, like 1MB.

[1] pg_dump / copy bugs with "big lines" ?
https://www.postgresql.org/message-id/1836813.YmyOrS99PX%40ronan.dunklau.fr"
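The chunking approach suggested in that quote can be sketched in application code. This is only an illustration of the idea; the 1 MB chunk size and the (seq, chunk) layout mirror pg_largeobject's pages but are otherwise my own hypothetical choices:

```python
CHUNK_SIZE = 1024 * 1024  # 1 MB per chunk, the size suggested above

def split_blob(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield (seq, chunk) pairs, analogous to pg_largeobject's
    (loid, pageno, data) rows but with much larger pages."""
    for seq, off in enumerate(range(0, len(data), chunk_size)):
        yield seq, data[off:off + chunk_size]

def join_blob(chunks) -> bytes:
    """Reassemble (seq, chunk) pairs back into the original bytes."""
    return b"".join(chunk for _, chunk in sorted(chunks))
```

Each chunk would then be stored as one bytea row (e.g. in a hypothetical table like blob_chunks(blob_id, seq, data)). Since every row hex-encodes to roughly 2 MB, COPY and pg_dump never come near the 1 GB allocation limit.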

> *Jean-Marc Lessard*
> Administrateur de base de données / Database Administrator
> Ultra Electronics Forensic Technology Inc.
> *T* +1 514 489 4247 x4164
> www.ultra-forensictechnology.com
>

--
Angular momentum makes the world go 'round.


Browse pgsql-general by date:
Previous Message: Jean-Marc Lessard, 2018-11-09 23:49:24, "pg_dump out of memory for large table with LOB"
Next Message: Adrian Klaver, 2018-11-10 23:31:31, "Re: pg_dump out of memory for large table with LOB"