Re: Dump/Reload pg_statistic to cut time from pg_upgrade?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jerry Sievers <gsievers19(at)comcast(dot)net>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, Kevin Grittner <kgrittn(at)ymail(dot)com>, "pgsql-admin\(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Dump/Reload pg_statistic to cut time from pg_upgrade?
Date: 2013-07-10 16:46:26
Message-ID: 26325.1373474786@sss.pgh.pa.us
Lists: pgsql-admin

Jerry Sievers <gsievers19(at)comcast(dot)net> writes:
> Kevin Grittner <kgrittn(at)ymail(dot)com> writes:
>> Jerry Sievers <gsievers19(at)comcast(dot)net> wrote:
>>> Planning to pg_upgrade some large (3TB) clusters using hard link
>>> method. Run time for the upgrade itself takes around 5 minutes.
>>> Unfortunately the post-upgrade analyze of the entire cluster is going
>>> to take a minimum of 1.5 hours running several threads to analyze all
>>> tables. This was measured in an R&D environment.

At least for some combinations of source and destination server
versions, it seems like it ought to be possible for pg_upgrade to just
move the old cluster's pg_statistic tables over to the new, as though
they were user data. pg_upgrade takes pains to preserve relation OIDs
and attnums, so the key values should be compatible. Except in
releases where we've added physical columns to pg_statistic or made a
non-backward-compatible redefinition of statistics meanings, it seems
like this should Just Work. In cases where it doesn't work, pg_dump
and reload of that table would not work either (even without the
anyarray problem).
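For context on why preserved OIDs and attnums are the crux here: each pg_statistic row is keyed by starelid (the table's pg_class OID) and staattnum (the column number), so copied rows stay valid only if both survive the upgrade unchanged. A catalog query along these lines (standard system columns; the table name is a hypothetical placeholder, not from the message) shows the linkage:

```sql
-- pg_statistic rows are keyed by starelid + staattnum, so they only
-- remain meaningful if pg_upgrade preserves relation OIDs and attnums.
-- The stavaluesN columns are of type anyarray -- the "anyarray problem"
-- that prevents a plain dump/reload of this table across versions.
SELECT c.relname,
       a.attname,
       s.stanullfrac,
       s.stadistinct
FROM pg_statistic s
JOIN pg_class     c ON c.oid = s.starelid
JOIN pg_attribute a ON a.attrelid = s.starelid
                   AND a.attnum   = s.staattnum
WHERE c.relname = 'my_table';  -- hypothetical table name
```

This is only a sketch of the keying, not a migration recipe; as noted above, any release that changes pg_statistic's physical columns or the meaning of its statistics would invalidate a straight row copy anyway.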

regards, tom lane
