From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: o(dot)blomqvist(at)secomintl(dot)com (Otto Blomqvist)
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Speed of pg_dump -l -s (List Schema) Variations
Date: 2004-07-01 03:30:55
Message-ID: 21512.1088652655@sss.pgh.pa.us
Lists: pgsql-general
o(dot)blomqvist(at)secomintl(dot)com (Otto Blomqvist) writes:
> I have a small database (10MB gz dump). When I do a pg_dump -l -s (to
> list the schema) of the original database it takes below 1 second. But
> when I do dump of a copy of the database (using a full restore into a
> new DB) it takes like 10-15 seconds to do the schema list (pg_dump -l
> -s). I need to compare the schemas of about 20 tables and this takes a
> while... Anyone have any ideas? I can't figure out why the newly
> created copy would be so much slower.
The first thought that comes to mind is that the new database needs to
be VACUUM ANALYZEd. pg_dump does some fairly complicated queries
against the system catalogs, and it's not surprising that you might see
bad plans for those queries if the statistics aren't up-to-date.
If VACUUM ANALYZE doesn't help, I'd be interested to look more closely.
regards, tom lane
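For reference, the suggested remedy is a one-liner; run it while connected to the restored database (any database name shown below is a placeholder, not from the original thread):

```sql
-- Refresh planner statistics for all tables in the current database,
-- including the system catalogs that pg_dump's queries run against.
VACUUM ANALYZE;
```

After that, re-timing the schema-only dump (e.g. `time pg_dump -s mydb > /dev/null` from the shell, with `mydb` standing in for the restored database) should show whether stale statistics were the cause.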