pg_dump resulting in excessive memory use by postmaster process

From: "Ben" <ben(dot)fyvie(at)champsoftware(dot)com>
To: "'PG-General Mailing List'" <pgsql-general(at)postgresql(dot)org>
Subject: pg_dump resulting in excessive memory use by postmaster process
Date: 2013-10-21 15:11:43
Message-ID: 00ed01cece6f$da9a4c80$8fcee580$@champsoftware.com
Lists: pgsql-general

We've been using pg_dump and pg_restore for many years now and they have always
worked well for us. However, we are currently undertaking a major DB
re-architecture to partition our tenant data into separate Postgres schemas
instead of storing all data in the public schema. When we attempt a pg_dump
with the --schema-only option against the new architecture, the postmaster
process consumes massive amounts of memory until it is killed off (6GB+ used
by a single postmaster process). Here are the details, followed by the rough
invocation:

PG version: 9.2.5

Total number of postgres schemas: 248

Total number of relations across all schemas: 53,154
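
For completeness, the dump is invoked along these lines (the database name and
output file below are placeholders; --schema-only is the relevant option):

    pg_dump --schema-only --file=schema_only.dump mydb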

If I perform a --schema-only dump on a DB with only 11 schemas, it succeeds
and produces a dump file of only 7.5MB. All of our tenant schemas contain the
exact same relations, so memory use and dump size should scale roughly
linearly as more tenant schemas are added.
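
For reference, the schema and relation counts above come from catalog queries
along these lines (system schemas excluded):

    -- count non-system schemas
    SELECT count(*)
      FROM pg_namespace
     WHERE nspname NOT LIKE 'pg\_%'
       AND nspname <> 'information_schema';

    -- count relations across all non-system schemas
    SELECT count(*)
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE n.nspname NOT LIKE 'pg\_%'
       AND n.nspname <> 'information_schema';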

I should also mention that there is absolutely no other DB activity while
these dumps run. Does anyone have ideas why a pg_dump causes this excessive
memory growth in the postmaster process?
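
In case it helps with diagnosis, this is roughly how I've been watching the
memory grow while the dump runs. pg_dump identifies itself via
application_name, so the backend's pid can be found in pg_stat_activity (the
pid below is a placeholder):

    -- find the backend serving the dump
    SELECT pid, application_name, state
      FROM pg_stat_activity
     WHERE application_name = 'pg_dump';

Then, from a shell, watch that process's resident set size:

    watch -n 5 'ps -o pid,rss,vsz,cmd -p 12345'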
