Clarification on the release notes of postgresql 12 regarding pg_upgrade

From: Marcelo Lacerda <marceloslacerda(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Clarification on the release notes of postgresql 12 regarding pg_upgrade
Date: 2019-10-04 13:00:17
Message-ID: CAPmRTtM5S-uc20MHU2PsMkQDwR0xjULQBuAsMzN1WDBw0aePbg@mail.gmail.com
Lists: pgsql-general

There are a few instances where the release notes seem to indicate that the
administrator should use pg_dump rather than pg_upgrade to upgrade a database,
so that the improvements to btree indexes become available.

Here they are:

1.
> In new btree indexes, the maximum index entry length is reduced by eight
> bytes, to improve handling of duplicate entries (Peter Geoghegan)
> This means that a REINDEX
> <https://www.postgresql.org/docs/12/sql-reindex.html> operation on an index
> pg_upgrade'd from a previous release could potentially fail.

2.
> Improve performance and space utilization of btree indexes with many
> duplicates (Peter Geoghegan, Heikki Linnakangas)
> ...
> Indexes pg_upgrade'd from previous releases will not have these benefits.

3.
> Allow multi-column btree indexes to be smaller (Peter Geoghegan, Heikki
> Linnakangas)
> ...
> Indexes pg_upgrade'd from previous releases will not have these benefits.

My questions are:

1. Is this a current limitation of pg_upgrade that will be dealt with later?

2. Are we going to see more such cases, where pg_upgrade leaves the
database incompatible with newer features?

3. What's the recommendation for administrators with databases that are too
large to be upgraded with pg_dump?
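
For reference, the two approaches I'm weighing for question 3 look roughly
like this (version numbers, ports and paths are only an example, not my
actual setup):

    # Dump/restore: every index is recreated by the new server, so it gets
    # the new btree format, but it needs a full copy of the data and a long
    # maintenance window on a large database.
    pg_dumpall --port=5432 | psql --port=5433 postgres

    # In-place upgrade with hard links: much faster, but according to the
    # notes above the carried-over btree indexes keep the old format until
    # they are reindexed.
    pg_upgrade --link \
        -b /usr/lib/postgresql/11/bin  -B /usr/lib/postgresql/12/bin \
        -d /var/lib/postgresql/11/main -D /var/lib/postgresql/12/main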
