Re: pg_dump with 1100 schemas being a bit slow

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Loic d'Anterroches" <diaeresis(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: pg_dump with 1100 schemas being a bit slow
Date: 2009-10-07 14:23:48
Message-ID: 9020.1254925428@sss.pgh.pa.us
Lists: pgsql-general

"Loic d'Anterroches" <diaeresis(at)gmail(dot)com> writes:
> Each night I am running:
> pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip >
> /path/to/backups/%s/%s-%s.sql.gz
> This runs once for each installation, so 1100 times. The substitution
> strings are used to timestamp the file and pick the right schema.

This seems like a pretty dumb way to go at it. Why don't you just do
one -Fc dump for the whole database? If you ever actually need to
restore a single schema, there's a pg_restore switch for that.
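
For illustration only (the backup path, timestamp placeholder, and schema name
below are stand-ins; only the database name "indefero" comes from your
command), that could look roughly like:

    # one custom-format dump of the whole database, nightly
    pg_dump -Fc --no-acl -U postgres indefero > /path/to/backups/indefero-%s.dump
    # later, restore just one schema from that dump if ever needed
    pg_restore -n some_schema -d indefero /path/to/backups/indefero-%s.dump

Custom-format dumps are compressed by default (when built with zlib), so the
separate gzip step shouldn't be needed either.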

> I think that pg_dump, when looking at the objects to dump, even when it
> is limited to a given schema, is scanning the complete database in each
> of those calls:

Yes, it has to examine all database objects in order to trace
dependencies properly.

> Is there an option along the lines of "I know what I am doing, do not
> look outside of the schema" which could help in my case?

No.

regards, tom lane
