Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From: Sergey Klochkov <klochkov(at)iqbuzz(dot)ru>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects
Date: 2013-10-01 10:46:16
Message-ID: 524AA7F8.20904@iqbuzz.ru
Lists: pgsql-admin pgsql-sql

Stack trace:

Thread 1 (Thread 0x7ff72c4c97c0 (LWP 13086)):
#0  removeHeapElement (objs=0x1a0c90630, numObjs=<value optimized out>, preBoundaryId=<value optimized out>, postBoundaryId=<value optimized out>) at pg_dump_sort.c:502
#1  TopoSort (objs=0x1a0c90630, numObjs=<value optimized out>, preBoundaryId=<value optimized out>, postBoundaryId=<value optimized out>) at pg_dump_sort.c:415
#2  sortDumpableObjects (objs=0x1a0c90630, numObjs=<value optimized out>, preBoundaryId=<value optimized out>, postBoundaryId=<value optimized out>) at pg_dump_sort.c:280
#3  0x000000000041acd1 in main (argc=<value optimized out>, argv=<value optimized out>) at pg_dump.c:747

On 01.10.2013 14:23, Sergey Klochkov wrote:
> I've upgraded to 9.2.4. The problem still persists. It consumed 10 GB
> of RAM in 5 minutes and was still growing. The dump file never appeared.
>
> On 01.10.2013 14:04, Jov wrote:
>> Try updating to the latest release; I see there is a bug fix for
>> pg_dump running out of memory in 9.2.2, from the release notes
>> http://www.postgresql.org/docs/devel/static/release-9-2-2.html:
>>
>> * Work around unportable behavior of malloc(0) and realloc(NULL, 0)
>>   (Tom Lane)
>>
>>   On platforms where these calls return NULL, some code mistakenly
>>   thought that meant out-of-memory. This is known to have broken
>>   pg_dump for databases containing no user-defined aggregates. There
>>   might be other cases as well.
>>
>>
>> Jov
>> blog: http://amutu.com/blog
>>
>>
>> 2013/10/1 Sergey Klochkov <klochkov(at)iqbuzz(dot)ru>
>>
>> Hello All,
>>
>> While trying to back up a database of relatively modest size (160 GB),
>> I ran into the following issue:
>>
>> When I run
>> $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
>>
>> File /path/to/mydb.dmp does not appear (yes, I've checked
>> permissions and so on). pg_dump just begins to consume memory until
>> it eats up all available RAM (96 GB total on the server, >64 GB free)
>> and is killed by the OOM killer.
>>
>> According to pg_stat_activity, pg_dump runs the following query
>>
>> SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid =
>> lomowner) AS rolname, lomacl FROM pg_largeobject_metadata
>>
>> until it is killed.
>>
>> strace shows that pg_dump is constantly reading a large amount of
>> data from a UNIX socket. I suspect that it is the result of the
>> above query.
>>
>> There are more than 300,000,000 large objects in the database. Please
>> don't ask me why.
>>
>> I tried googling this and found mentions of pg_dump being killed by
>> the OOM killer, but nothing related to a huge number of large objects.
>>
>> Is there any method of working around this issue?
>>
>> Thanks in advance.
>>
>> OS: CentOS 6
>> PostgreSQL version: 9.2.1
>> RAM: 96 GB
>>
>> PostgreSQL configuration:
>>
>> listen_addresses = '*'
>> port = 5432
>> max_connections = 500
>> shared_buffers = 16GB
>> temp_buffers = 64MB
>> work_mem = 512MB
>> maintenance_work_mem = 30000MB
>> checkpoint_segments = 70
>> effective_cache_size = 50000MB
>> logging_collector = on
>> log_directory = 'pg_log'
>> log_filename = 'postgresql-%a.log'
>> log_truncate_on_rotation = on
>> log_rotation_age = 1d
>> log_rotation_size = 0
>> log_min_duration_statement = 5000
>> log_line_prefix = '%t'
>> autovacuum = on
>> log_autovacuum_min_duration = 0
>> autovacuum_max_workers = 5
>> autovacuum_naptime = 5s
>> autovacuum_vacuum_threshold = 25
>> autovacuum_vacuum_scale_factor = 0.1
>> autovacuum_vacuum_cost_delay = 7ms
>> autovacuum_vacuum_cost_limit = 1500
>> datestyle = 'iso, dmy'
>> lc_monetary = 'ru_RU.UTF-8'
>> lc_numeric = 'ru_RU.UTF-8'
>> lc_time = 'ru_RU.UTF-8'
>> default_text_search_config = 'pg_catalog.russian'
>>
>> --
>> Sergey Klochkov
>> klochkov(at)iqbuzz(dot)ru
>>
>
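
On my workaround question above: the one idea I have so far, strictly a
sketch, is to keep the large objects out of the regular dump and copy
them through a separate channel. In 9.2 a schema-only or per-schema dump
skips blobs unless -b is given, so something along these lines might
work (the /backup/blobs path is made up, the data dump assumes
everything lives in schema "public", and with 300000000 objects the
one-connection-per-blob loop below would be far too slow as-is and
would need batching):

$ pg_dump -f /path/to/mydb.schema.dmp -C -Z 9 -s mydb     # schema only, no blobs
$ pg_dump -f /path/to/mydb.data.dmp -Z 9 -n public mydb   # data, still no blobs
$ psql -At -c "SELECT oid FROM pg_largeobject_metadata" mydb |
  while read loid; do
      psql -c "\lo_export $loid /backup/blobs/$loid" mydb  # hypothetical target dir
  done

Whether importing 300000000 files back with \lo_import would be any more
practical is another question, of course.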

--
Sergey Klochkov
klochkov(at)iqbuzz(dot)ru
