Determining Indexes to Rebuild on new libc

From: Don Seiler <don(at)seiler(dot)us>
To: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Determining Indexes to Rebuild on new libc
Date: 2022-08-04 14:29:20
Message-ID: CAHJZqBBWUAzKKkTaOf_QDYtmuMavntg4ba-_ScNJRKkuxtRsyw@mail.gmail.com
Lists: pgsql-admin

Good morning,

As we're staring down the eventuality of having to migrate to a newer OS
(we're currently on Ubuntu 18.04 LTS), we're preparing for the collation-change
madness that will ensue. We're looking at logical replication, but there is
a lot to unpack there given the number of databases and the massive size of a
few of them. I had been inclined to bite the bullet and do logical replication
(or dump/restore on the smaller DBs), but the project timeframe is being pushed
up, so I'm looking for shortcuts where possible (obviously without risking DB
integrity). This would also give me the opportunity for other changes, like
enabling data checksums on the new DBs, that I have sorely wanted for years.

One question that gets asked is whether we could do physical replication, cut
over, and then rebuild only the indexes that "need it" in order to minimize the
subsequent downtime. That is, can we determine which indexes will actually have
a potential problem? For example, a lot of indexes are on text/varchar columns
that hold nothing but UUID data (basic alphanumeric characters with embedded
hyphens). If we can be certain that these columns truly hold only that kind of
data, could we skip rebuilding their indexes after the cutover to a newer OS
(e.g. Ubuntu 22.04 LTS with its newer libc collation data)?

Thanks,
Don.

--
Don Seiler
www.seiler.us
