From: Mike Williams <mike(dot)williams(at)comodo(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: pg_restore >10million large objects
Date: 2013-12-23 15:19:03
Message-ID: 8431385.HKVqUUWTfq@mahdell
Lists: pgsql-admin
Hi all,
There have been some questions about pg_dump and huge numbers of large objects
recently. I have a query about the opposite.
How can I make restoring a database with a lot of large objects run faster?
My database has a relatively piddling 13 million large objects, so dumping it
isn't a problem.
Restoring it is a problem though.
This is for a migration from 8.4 to 9.3. The dump is taken using pg_dump from
9.3.
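(For reference, the dump and restore are run roughly like this; the host,
database name and file name here are just placeholders:

  pg_dump -Fc -h old84host -f dump.custom mydb     # 9.3 pg_dump against the 8.4 server
  pg_restore --jobs=8 -d mydb dump.custom          # restore on the 9.3 server
)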
I've run a test restore on a significantly smaller system: ~4GB overall, with
1.1 million large objects. It took 2 hours, give or take, though the server
it's on isn't especially fast.
It seems that each "SELECT pg_catalog.lo_create('xxxxx');" is run
independently and sequentially, despite having --jobs=8 specified.
Is there any magic incantation, or animal sacrifice, I can make to get those
lo_create() calls run in parallel?
Our 9.3 production servers have 12 cores (plus HT) and SSDs, so they can
handle many queries at the same time.
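One workaround I've been wondering about, purely as an untested sketch (the
file and database names below are made up), is to pull the TOC out with
pg_restore -l, split the BLOB entries across several list files, and run one
pg_restore -L per list file in parallel against the same archive:

  pg_restore -l dump.custom > full.toc
  grep ' BLOB ' full.toc > blobs.toc        # one TOC line per large object
  split -n l/8 blobs.toc blobs.part.        # GNU split: 8 roughly equal pieces
  for f in blobs.part.*; do
      pg_restore -L "$f" -d mydb dump.custom &
  done
  wait

I don't know whether pg_restore is happy restoring only a slice of the blob
entries like that, or whether the lo_create() calls would still end up
serialised on something else, hence the question.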
Thanks
--
Mike Williams