Re: Upgrade from PG12 to PG

From: Scott Ribe <scott_ribe(at)elevated-dev(dot)com>
To: Jef Mortelle <jefmortelle(at)gmail(dot)com>
Cc: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: Upgrade from PG12 to PG
Date: 2023-07-20 14:51:35
Message-ID: B6D3FD80-5794-47B8-9074-E10C4945951B@elevated-dev.com
Lists: pgsql-admin

> On Jul 20, 2023, at 7:46 AM, Jef Mortelle <jefmortelle(at)gmail(dot)com> wrote:
>
> So: not possible to have very little downtime if you have a database with a lot of rows containing text as datatype, as pg_upgrade needs 12 hr for 24 million rows in pg_largeobject.

We need to get terminology straight, as at the moment your posts are very confusing. In PostgreSQL, large objects and text are not the same. Text is basically varchar without a specified length limit. A large object is a blob (but not what SQL calls a BLOB); it is kind of like a file stored outside the normal table mechanism, and it provides facilities for partial reads, etc.: https://www.postgresql.org/docs/15/largeobjects.html. There are a number of ways to wind up with all references to a large object deleted while the orphaned large object itself remains in the database.
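
To make that concrete, a rough sketch (table and file names here are invented for illustration):

    -- text: an ordinary column value, stored with the row (or TOASTed)
    CREATE TABLE docs (id int PRIMARY KEY, body text);

    -- large object: a separate entry in pg_largeobject, referenced by OID
    CREATE TABLE files (id int PRIMARY KEY, content oid);
    INSERT INTO files VALUES (1, lo_import('/tmp/somefile'));

    -- deleting the row does NOT delete the large object; without a
    -- matching lo_unlink(content) it is left orphaned in pg_largeobject
    DELETE FROM files WHERE id = 1;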

First thing you should do: run vacuumlo -n to find out whether you have orphaned large objects. If so, start cleaning those up, then see how long pg_upgrade takes.
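
Something like this, assuming the contrib vacuumlo utility is installed (the database name is a placeholder):

    # dry run: report orphaned large objects without removing anything
    vacuumlo -n -v yourdb

    # then actually remove the orphans
    vacuumlo -v yourdb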

Second, what's your hardware? I really don't see dump & restore of a 1TB database taking 6 hours.
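
For comparison, a parallel dump and restore would look something like this (paths and job counts are illustrative):

    # directory-format dump with 8 parallel workers
    pg_dump -Fd -j 8 -f /backup/mydb.dump mydb

    # parallel restore into the new cluster
    pg_restore -j 8 -d mydb /backup/mydb.dump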

> Already tried to use --link and --jobs, but you cannot omit the "SELECT lo_unlink ...." for every row containing datatype text in your database that the pg_* program creates in the export/dump file.

Terminology again, or are you conflating two different issues? pg_upgrade --link does not create a dump file.
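
For reference, a typical link-mode run looks something like this (binary and data directory paths are illustrative):

    pg_upgrade \
      --old-bindir=/usr/lib/postgresql/12/bin \
      --new-bindir=/usr/lib/postgresql/15/bin \
      --old-datadir=/var/lib/postgresql/12/main \
      --new-datadir=/var/lib/postgresql/15/main \
      --link --jobs=4

In link mode the user data files are hard-linked into the new cluster rather than copied, which is why no data dump file is involved.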
