From: bricklen <bricklen(at)gmail(dot)com>
To: Mike Williams <mike(dot)williams(at)comodo(dot)com>
Cc: "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: pg_restore >10million large objects
Date: 2013-12-23 15:54:48
Message-ID: CAGrpgQ_oWuPdGgM1GjW42eW7AM10RUzpAWf6kqZj0xc4QZoU4w@mail.gmail.com
Lists: pgsql-admin
On Mon, Dec 23, 2013 at 7:19 AM, Mike Williams <mike(dot)williams(at)comodo(dot)com> wrote:
>
> How can restoring a database with a lot of large objects run faster?
>
> It seems that each "SELECT pg_catalog.lo_create('xxxxx');" is run
> independently and sequentially, despite having --jobs=8 specified.
>
>
I don't have an answer for why the restore appears to be serialized, but have
you considered creating your pg_dump (-Fc) while excluding all the large
objects, then dumping or COPYing the large objects out separately so you can
import them with a manually-specified number of processes? By "manually
specified", I mean executing a number of COPY FROM commands in separate
threads.
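As a rough, untested sketch of the parallel import step (the DSN, directory
path, per-OID file naming, and worker count below are all assumptions you
would adjust), something like the following Python script using psycopg2 could
recreate each large object under its original OID from a directory of exported
files, using several processes instead of a single serialized stream:

import os
from multiprocessing import Pool

import psycopg2

DSN = "dbname=target"        # assumption: connection string for the target database
LOB_DIR = "/path/to/lobs"    # assumption: one exported file per large object, named <oid>.bin
WORKERS = 8                  # number of parallel import processes

_conn = None

def _init_worker():
    # Open one connection per worker process and reuse it for every import,
    # rather than reconnecting for each of the millions of large objects.
    global _conn
    _conn = psycopg2.connect(DSN)

def _import_one(filename):
    oid = int(os.path.splitext(filename)[0])
    path = os.path.join(LOB_DIR, filename)
    # conn.lobject(0, "wb", new_oid, new_file) creates a large object with the
    # requested OID and loads the file's contents into it, so existing OID
    # references in your tables remain valid after the restore.
    _conn.lobject(0, "wb", oid, path)
    _conn.commit()
    return oid

if __name__ == "__main__":
    files = [f for f in os.listdir(LOB_DIR) if f.endswith(".bin")]
    with Pool(WORKERS, initializer=_init_worker) as pool:
        for n, oid in enumerate(pool.imap_unordered(_import_one, files), 1):
            if n % 10000 == 0:
                print("imported", n, "large objects")

The export side could be done the same way in reverse (lobject(oid).export(path)
per OID), and the worker count is just whatever level of parallelism your I/O
and server can sustain.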