From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: Julie Nishimura <juliezain(at)hotmail(dot)com>, "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: migration of 100+ tables
Date: 2019-03-11 01:28:48
Message-ID: 06e27bdf-e046-32f8-af1c-55a60c21a88b@aklaver.com
Lists: pgsql-general
On 3/10/19 5:53 PM, Julie Nishimura wrote:
> Hello friends, I will need to migrate 500+ tables from one server (8.3)
> to another (9.3). I cannot dump and load the entire database due to
> storage limitations (because the source is > 20 TB, and the target is
> about 1.5 TB).
>
> I was thinking about using pg_dump with customized -t flag, then use
> restore. The table names will be in the list, or I could dump their
> names in a table. What would be your suggestions on how to do it more
> efficiently?
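(The -t route you mention can be sketched as below — building one -t flag per table from a file of names. tables.txt, the table names, tables.dump, and sourcedb are all hypothetical placeholders; the pg_dump command is echoed rather than run, since no live server is assumed here.)

```shell
# Hypothetical file listing the tables to migrate, one per line
printf 'public.users\npublic.orders\n' > tables.txt

# Turn each name into a -t flag for pg_dump
args=$(sed 's/^/-t /' tables.txt | tr '\n' ' ')

# Echo the command rather than execute it (no live server assumed)
echo pg_dump -Fc $args -f tables.dump sourcedb
# prints: pg_dump -Fc -t public.users -t public.orders -f tables.dump sourcedb
```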
The sizes you mention above, are they for the uncompressed raw data?
Are the tables all in one schema or multiple?
Where I am going with this is pg_dump -Fc --schema=<schema_name>.
See:
https://www.postgresql.org/docs/10/app-pgdump.html
Then use pg_restore -l to get a TOC (Table of Contents).
Comment out the items you do not want in the TOC.
Then pg_restore --use-list.
See:
https://www.postgresql.org/docs/10/app-pgrestore.html
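Putting those steps together, a rough sketch. The dump/restore commands are only echoed (they need live servers; sourcedb, targetdb, and the table names are placeholders), and a fabricated two-line TOC stands in for real pg_restore -l output:

```shell
# 1) Custom-format dump of one schema (echoed, not run; needs the source server)
echo pg_dump -Fc --schema=public -f public.dump sourcedb

# 2) The TOC normally comes from: pg_restore -l public.dump > toc.list
#    Two made-up entries stand in for it here
cat > toc.list <<'EOF'
215; 1259 16386 TABLE public keep_me postgres
216; 1259 16390 TABLE public skip_me postgres
EOF

# 3) Comment out (prefix with ';') every entry you do not want restored
sed '/TABLE public skip_me/s/^/;/' toc.list > toc.edited
cat toc.edited

# 4) Restore only the uncommented entries (echoed, not run)
echo pg_restore --use-list=toc.edited -d targetdb public.dump
```

The ';' prefix is how pg_restore -l marks disabled TOC entries, so editing the list file and feeding it back with --use-list restores only what you left uncommented.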
>
> Thank you for your ideas, this is great to have you around, guys!
>
>
--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com
Next Message: Julie Nishimura | 2019-03-11 04:07:05 | Re: migration of 100+ tables
Previous Message: Julie Nishimura | 2019-03-11 00:53:08 | migration of 100+ tables