From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speeding up pg_upgrade
Date: 2017-12-07 16:28:23
Message-ID: 20171207162823.GB23286@telsasoft.com
Lists: pgsql-hackers
On Tue, Dec 05, 2017 at 09:01:35AM -0500, Bruce Momjian wrote:
> As part of PGConf.Asia 2017 in Tokyo, we had an unconference topic about
> zero-downtime upgrades. ... we discussed speeding up pg_upgrade.
>
> There are clusters that take a long time to dump the schema from the old
> cluster
Maybe it isn't representative of a typical case, but I can offer a data point:
we have ~40 customers with DBs ranging in size from <100GB to ~25TB
(of which ~90% is on a ZFS tablespace with compression). We have what's
traditionally considered an excessive number of child tables, which works
okay since planning time is unimportant to us for the report queries that hit
them. Some of the tables are wide (historically up to 1600 columns). Some of
those have default values on nearly every column, and pg_attrdef was large
(>500MB), causing pg_dump --section=pre-data to be slow (10+ minutes). Since
pg_upgrade runs something similar, I worked around the issue for now by
dropping the defaults on the historic children in advance of upgrades (at some
point I'll figure out what I have to do to allow DROPing the DEFAULTs). It's
not the first time we've seen an issue with a large number of children × columns.
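The workaround above amounts to emitting an ALTER TABLE ... DROP DEFAULT for each affected child-table column before running pg_upgrade. A minimal sketch of that generation step, assuming the (schema, table, column) triples have already been collected (in practice they would come from a query against pg_attrdef joined to pg_attribute and pg_class; the names below are made up for illustration):

```python
def drop_default_statements(defaults):
    """Build ALTER TABLE ... DROP DEFAULT statements.

    defaults: iterable of (schema, table, column) tuples, e.g. as
    fetched from the system catalogs. Identifiers are double-quoted;
    this sketch does not handle embedded quotes in identifiers.
    """
    stmts = []
    for schema, table, column in defaults:
        stmts.append(
            'ALTER TABLE "{}"."{}" ALTER COLUMN "{}" DROP DEFAULT;'.format(
                schema, table, column
            )
        )
    return stmts


# Hypothetical child-table names, for illustration only:
for stmt in drop_default_statements(
    [("public", "child_2017_01", "val"), ("public", "child_2017_02", "val")]
):
    print(stmt)
```

The resulting script can be fed to psql against the old cluster before the upgrade window; the defaults then have to be re-added afterward if the application still relies on them.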
Our slowest pg_upgrade was ~40min, caused by column defaults in a case where I
failed to re-DROP the DEFAULTs after our first scheduled upgrade date was pushed
back by over a month. Most of the rest completed in less than 15min.
Justin