From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: senor <frio_cervesa(at)hotmail(dot)com>
Cc: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: pg_upgrade --jobs
Date: 2019-04-06 23:50:58
Message-ID: 20138.1554594658@sss.pgh.pa.us
Lists: pgsql-general
senor <frio_cervesa(at)hotmail(dot)com> writes:
> Is the limitation simply the state of development to date or is there
> something about dumping the schemas that conflicts with paralleling?
At minimum, it'd take a complete redesign of pg_dump's output format,
and I'm not even very sure what such a redesign would look like. All
the schema information goes into a single file that has to be written
serially. Trying to make it be one file per table definition wouldn't
really fix much: somewhere there has to be a "table of contents", plus
where are you going to put the dependency info that shows what ordering
is required for restore?
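To make the ordering point concrete, here is a small sketch (not from the original mail) using the coreutils `tsort` utility. Restore ordering is a topological sort over object dependencies; the object names below are invented for illustration and are not pg_dump's actual TOC format:

```shell
# Each line "A B" means A must exist before B can be restored.
# pg_dump records equivalent dependency info in its archive TOC;
# tsort just demonstrates the ordering problem on toy data.
printf '%s\n' \
  'schema_public table_orders' \
  'table_customers table_orders' \
  'schema_public table_customers' | tsort
```

For this toy graph there is only one valid order: the schema first, then `table_customers`, then `table_orders` (which depends on both).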
> The --link option to pg_upgrade would be so much more useful if it
> weren't still bound to serially dumping the schemas of half a million
> tables.
To be perfectly blunt, if you've got a database with half a million
tables, You're Doing It Wrong.
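For what parallelism does exist today: pg_upgrade's --jobs option runs its per-database steps concurrently, so it helps when the cluster has many databases rather than many tables in one database. A sketch of the invocation (all paths here are placeholders, not values from this thread):

```shell
# Placeholder paths -- substitute your real binary and data directories.
pg_upgrade \
  --old-bindir=/usr/lib/postgresql/old/bin \
  --new-bindir=/usr/lib/postgresql/new/bin \
  --old-datadir=/var/lib/postgresql/old/data \
  --new-datadir=/var/lib/postgresql/new/data \
  --link \
  --jobs=8
```

pg_dump's own --jobs flag (with the directory format, -Fd) similarly parallelizes dumping table *data*, but the schema entries still pass through a single serially-written table of contents, which is the bottleneck discussed above.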
regards, tom lane