Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From: Alejandro Brust <alejandrob(at)pasteleros(dot)org(dot)ar>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects
Date: 2013-10-01 19:30:15
Message-ID: 524B22C7.2010300@pasteleros.org.ar
Lists: pgsql-admin pgsql-sql

Did you perform any vacuumdb / reindexdb before the pg_dump?
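
For reference, that sequence would look roughly like the following; a minimal sketch, assuming the database name "mydb" from the original report (vacuumdb and reindexdb are the standard client wrappers around VACUUM and REINDEX):

$ vacuumdb --analyze -d mydb
$ reindexdb -d mydb
$ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb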

On 01/10/2013 09:49, Magnus Hagander wrote:
> On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov <klochkov(at)iqbuzz(dot)ru> wrote:
>> Hello All,
>>
>> While trying to back up a database of relatively modest size (160 GB), I ran
>> into the following issue:
>>
>> When I run
>> $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
>>
>> File /path/to/mydb.dmp does not appear (yes, I've checked permissions and so
>> on). pg_dump just begins to consume memory until it eats up all available
>> RAM (96 GB total on the server, >64 GB available) and is killed by the OOM
>> killer.
>>
>> According to pg_stat_activity, pg_dump runs the following query
>>
>> SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner)
>> AS rolname, lomacl FROM pg_largeobject_metadata
>>
>> until it is killed.
>>
>> strace shows that pg_dump is constantly reading a large amount of data from
>> a UNIX socket. I suspect that it is the result of the above query.
>>
>> There are >300000000 large objects in the database. Please don't ask me why.
>>
>> I tried googling this and found mentions of pg_dump being killed by the OOM
>> killer, but I failed to find anything related to such a huge number of large
>> objects.
>>
>> Is there any method of working around this issue?
> I think this problem comes from the fact that pg_dump treats each
> large object as its own item. See getBlobs(), which allocates a
> BlobInfo struct for each LO (and a DumpableObject if there are any,
> but that's just one).
>
> I assume the query (from that file):
> SELECT oid, lomacl FROM pg_largeobject_metadata
>
> returns 300000000 rows, which are then looped over?
>
> I ran into a similar issue a few years ago with a client using a
> 32-bit version of pg_dump, and worked around it by moving to
> 64-bit. Unfortunately, I did not have time to look into the
> underlying issue.
>
>
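
A quick way to confirm the scale described above is to count the rows pg_dump must loop over; a minimal sketch, assuming the database name "mydb" from the original report:

$ psql -d mydb -Atc "SELECT count(*) FROM pg_largeobject_metadata"

With roughly 300000000 rows, even a couple of hundred bytes of per-object bookkeeping (the BlobInfo and DumpableObject entries, plus the full query result held client-side) works out to tens of gigabytes, which is consistent with the memory consumption reported.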
