Re: pg_dump with 1100 schemas being a bit slow

From: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>
To: Loic d'Anterroches <diaeresis(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: pg_dump with 1100 schemas being a bit slow
Date: 2009-10-07 16:29:21
Message-ID: 1254932961.11374.24.camel@jd-desktop.unknown.charter.com
Lists: pgsql-general

On Wed, 2009-10-07 at 12:51 +0200, Loic d'Anterroches wrote:
> Hello,

> My problem is that the dump increased steadily with the number of
> schemas (now about 20s from about 12s with 850 schemas) and pg_dump is
> now ballooning at 120MB of memory usage when running the dump.
>

And it will continue to. pg_dump has to take a lock on every object it
dumps, so as you add schemas and objects the number of locks to acquire,
and with it the time the backup takes, will keep growing. This applies
whether you run a single dump per schema or a global dump with -Fc.
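
For example, you can watch the lock count grow while a dump runs by
querying pg_locks from another session (a minimal sketch; the database
name and pid below are hypothetical, substitute your own database and
the backend pid of your pg_dump session):

    # Count the AccessShareLocks held by the pg_dump backend;
    # pg_dump takes one per table it is going to dump.
    psql -d mydb -c "SELECT count(*) FROM pg_locks
                     WHERE pid = 12345 AND mode = 'AccessShareLock';"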

I agree with the other participants in this thread that -Fc makes more
sense for you, but it won't change your overall speed all that much.
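
As a sketch of the two approaches (assuming a database named "mydb" and
a schema named "client_schema"; adjust to your naming):

    # One custom-format dump of the whole database:
    pg_dump -Fc -f mydb.dump mydb

    # versus one dump per schema, where each run has to re-acquire
    # locks on that schema's objects:
    pg_dump -Fc -n client_schema -f client_schema.dump mydb

A single -Fc archive still lets you restore one schema at a time later
with pg_restore -n.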

Joshua D. Drake

--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - Salamander
