From: senor <frio_cervesa(at)hotmail(dot)com>
To: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: pg_upgrade --jobs
Date: 2019-04-07 00:47:59
Message-ID: BYAPR01MB37015302FC3DE93AFADA2EB7F7530@BYAPR01MB3701.prod.exchangelabs.com
Lists: pgsql-general
Thanks Tom for the explanation. I assumed it was my ignorance of how the schema was handled that made this look like an already-solved problem I was somehow missing.
I fully expected the "You're Doing It Wrong" part. That is out of my control but not beyond my influence.
I suspect I know the answer to this but have to ask. Using a simplified example where there are 100K sets of 4 tables, each representing the output of a single job, are there any shortcuts to upgrading that would circumvent exporting the entire schema? I'm sure a different DB design would be better but that's not what I'm working with.
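For reference, this is the kind of invocation in question (paths and versions here are placeholders, not from the thread). As discussed above, --jobs parallelizes pg_upgrade across databases, so with all 100K table sets in a single database the schema dump/restore step still runs serially regardless of the job count:

```shell
# Sketch of a link-mode upgrade; adjust datadirs/bindirs for your install.
pg_upgrade \
  --old-datadir=/var/lib/postgresql/old/data \
  --new-datadir=/var/lib/postgresql/new/data \
  --old-bindir=/usr/lib/postgresql/old/bin \
  --new-bindir=/usr/lib/postgresql/new/bin \
  --link \
  --jobs=8   # helps only when the cluster has multiple databases
```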
Thanks
________________________________________
From: Ron <ronljohnsonjr(at)gmail(dot)com>
Sent: Saturday, April 6, 2019 4:57 PM
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: pg_upgrade --jobs
On 4/6/19 6:50 PM, Tom Lane wrote:
senor <frio_cervesa(at)hotmail(dot)com> writes:
[snip]
The --link option to pg_upgrade would be so much more useful if it
weren't still bound to serially dumping the schemas of half a million
tables.
To be perfectly blunt, if you've got a database with half a million
tables, You're Doing It Wrong.
Heavy (really heavy) partitioning?
--
Angular momentum makes the world go 'round.