From: "Hugo <Nabble>" <hugo(dot)tech(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: pg_dump and thousands of schemas
Date: 2012-05-27 04:12:13
Message-ID: 1338091933763-5710183.post@n5.nabble.com
Lists: pgsql-hackers pgsql-performance
Here is a sample dump that takes a long time to be written by pg_dump:
http://postgresql.1045698.n5.nabble.com/file/n5710183/test.dump.tar.gz
(the file above is 2.4 MB; the dump itself is 66 MB)
This database has 2,311 schemas similar to those in my production database.
All schemas are empty, but pg_dump still takes 3 hours to finish on my
computer. So now you can imagine my production database with more than
20,000 schemas like that. Can you guys take a look and see if the code has
room for improvements? I generated this dump with postgresql 9.1 (which is
what I have on my local computer), but my production database uses
postgresql 9.0. So it would be great if improvements could be delivered to
version 9.0 as well.
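In case anyone wants to reproduce a smaller variant of this setup locally without downloading the dump, here is a rough sketch (the schema count, table layout, and database name are my placeholders, not taken from the actual dump):

```shell
#!/bin/sh
# Generate a SQL file that creates many schemas, each with one small table,
# to exercise pg_dump's per-object overhead. N, OUT, and the table shape
# are placeholders; adjust to taste.
N=2311
OUT=many_schemas.sql
: > "$OUT"
i=1
while [ "$i" -le "$N" ]; do
    echo "CREATE SCHEMA s$i;" >> "$OUT"
    echo "CREATE TABLE s$i.t (id integer);" >> "$OUT"
    i=$((i + 1))
done
# Then load it into a scratch database and time a schema-only dump
# (assumes a local cluster you can create databases in):
#   createdb schematest
#   psql -q -d schematest -f "$OUT"
#   time pg_dump -s schematest > /dev/null
```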
Thanks a lot for all the help!
Hugo