From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Dave Page <dpage(at)pgadmin(dot)org>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speeding up pg_upgrade
Date: 2017-12-05 14:29:34
Message-ID: 20171205142934.GD25023@momjian.us
Lists: pgsql-hackers
On Tue, Dec 5, 2017 at 09:23:49AM -0500, Stephen Frost wrote:
> Dave,
>
> * Dave Page (dpage(at)pgadmin(dot)org) wrote:
> > On Tue, Dec 5, 2017 at 11:01 PM, Bruce Momjian <bruce(at)momjian(dot)us> wrote:
> > > As part of PGConf.Asia 2017 in Tokyo, we had an unconference topic about
> > > zero-downtime upgrades. After the usual discussion of using logical
> > > replication, Slony, and perhaps having the server be able to read old
> > > and new system catalogs, we discussed speeding up pg_upgrade.
> > >
> > > There are clusters that take a long time to dump the schema from the old
> > > cluster and recreate it in the new cluster. One idea of speeding up
> > > pg_upgrade would be to allow pg_upgrade to be run in two stages:
> > >
> > > 1. prevent system catalog changes while the old cluster is running, and
> > > dump the old cluster's schema and restore it in the new cluster
> > >
> > > 2. shut down the old cluster and copy/link the data files
> >
> > When we were discussing this, I was thinking that the linking could be done
> > in phase 1 too, as that's potentially slow on a very large schema.
>
> Right, I had that thought too when first reading this, but the problem
> there is that new files can show up due to a relation being extended (at
> least, and perhaps in other cases too..).
Oh, yikes, yes.
> > > My question is whether the schema dump/restore is time-consuming enough
> > > to warrant this optional more complex API, and whether people would
> > > support adding a server setting that prevented all system table changes?
> >
> > I've certainly heard of cases where pg_upgrade takes significant amounts of
> > time to run on very complex databases.
>
> Right, but that doesn't really answer the question as to which part of
> the pg_upgrade process is taking up the time.
>
> In any case, of course, if we're able to move part of what pg_upgrade
> does to be while the old server is online then that takes whatever the
> cost of that is out of the downtime window. The question is if that's a
> 5% improvement in the overall performance of pg_upgrade or a 70% one.
> This will be case-by-case, of course, but if, in the best-case, we only
> get a 5% improvement then this might not be worth the risk.
Yes, and who is going to know if they have a setup where the more
complex API is worth it? pg_upgrade is already complex enough to use.
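To make the idea concrete, here is a minimal sketch of how a staged run might be driven, assuming hypothetical --dump-schema-stage and --link-files-stage options that pg_upgrade does not have today (the bindir/datadir options, --link, and pg_ctl are real):

    #!/usr/bin/env python3
    # Sketch only: pg_upgrade has no staged mode.  The two "--*-stage"
    # options below are invented here to illustrate the proposed flow;
    # the bindir/datadir options and --link are existing pg_upgrade flags.
    import subprocess

    OLD_BIN = "/usr/lib/postgresql/old/bin"
    NEW_BIN = "/usr/lib/postgresql/new/bin"
    OLD_DATA = "/srv/pgsql/old/data"
    NEW_DATA = "/srv/pgsql/new/data"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    common = ["--old-bindir", OLD_BIN, "--new-bindir", NEW_BIN,
              "--old-datadir", OLD_DATA, "--new-datadir", NEW_DATA]

    # Stage 1: old cluster still running, with system catalog changes
    # blocked by some (hypothetical) server setting; dump the old schema
    # and restore it into the new cluster while applications keep working.
    run([NEW_BIN + "/pg_upgrade"] + common + ["--dump-schema-stage"])

    # Downtime window starts here: stop the old cluster.
    run([OLD_BIN + "/pg_ctl", "-D", OLD_DATA, "stop", "-m", "fast"])

    # Stage 2: link (or copy) the data files and finish the upgrade.
    run([NEW_BIN + "/pg_upgrade"] + common + ["--link", "--link-files-stage"])

The point of the split is that only stage 2 has to happen inside the downtime window.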
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +