pg_upgrade --jobs

From: senor <frio_cervesa(at)hotmail(dot)com>
To: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: pg_upgrade --jobs
Date: 2019-04-06 18:44:31
Message-ID: BYAPR01MB370142138620DE0D3DD6120BF7520@BYAPR01MB3701.prod.exchangelabs.com
Lists: pgsql-general

The pg_upgrade --jobs option is not passed as an argument when it calls pg_dump. I haven't found anything in the docs or forums mentioning a reason it would be unsupported under certain circumstances, other than possibly for pre-9.2 clusters. The pg_upgrade docs page states that --jobs allows multiple CPUs to be used for the dump and reload of schemas. Some of the databases I'm upgrading have 500,000+ tables, and running with a single process greatly increases the upgrade time.

I am also using the --link option.
I have tried "--jobs 20", "--jobs=20", placing this option first and last and many other variations.
I am upgrading 9.2.4 to 9.6.12 on CentOS 6.
Varying hardware but all with 32+ CPU cores.

su - postgres -c "/usr/pgsql-9.6/bin/pg_upgrade --jobs=20 --link \
--old-bindir=/usr/pgsql-9.2/bin/ \
--new-bindir=/usr/pgsql-9.6/bin/ \
--old-datadir=/var/lib/pgsql/9.2/data/ \
--new-datadir=/var/lib/pgsql/9.6/data/"

I feel like there's a simple reason I've missed, but this seems pretty straightforward.
A secondary plan would be to find instructions for doing manually what "pg_upgrade --link" does, so that I can run "pg_dump --jobs 20" myself.
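For reference, a rough sketch of that fallback using pg_dump's own parallelism (note the caveats: pg_dump's --jobs requires the directory output format and parallelizes the table-data dump rather than the schema step, and a dump/restore copies data instead of hardlinking, so it does not reproduce --link's savings; the database name, dump path, and ports below are placeholders, not values from this setup):

```shell
# Parallel dump from the old 9.2 cluster (directory format is required
# for --jobs), then parallel restore into the new 9.6 cluster.
# "mydb", /tmp/mydb.dump, and the port numbers are placeholder values.
/usr/pgsql-9.6/bin/pg_dump --jobs=20 --format=directory \
    --file=/tmp/mydb.dump --port=5432 mydb

/usr/pgsql-9.6/bin/pg_restore --jobs=20 --dbname=mydb \
    --port=5433 /tmp/mydb.dump
```

This would have to be repeated per database, and the global objects (roles, tablespaces) would still need a separate pg_dumpall --globals-only pass.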
Any assist is appreciated.
Thanks,
S. Cervesa
