Re: Upgrade from PG12 to PG

From: Jef Mortelle <jefmortelle(at)gmail(dot)com>
To: Scott Ribe <scott_ribe(at)elevated-dev(dot)com>
Cc: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: Upgrade from PG12 to PG
Date: 2023-07-20 17:05:54
Message-ID: 1cb1652e-286a-077e-950d-516970ecf499@gmail.com
Lists: pgsql-admin

Running /usr/lib/postgresql15/bin/pg_upgrade -v -p 5431 -P 5432 -k

gives the following in the output:

...
"/usr/lib/postgresql15/bin/pg_dump" --host /pg/PG15 --port 5431
--username postgres --schema-only --quote-all-identifiers
--binary-upgrade --format=custom --verbose
--file="/pg/PG15/system/pg_upgrade_output.d/20230720T184333.610/dump/pg_upgrade_dump_16385.custom"
'dbname=dbname' >>
"/pg/PG15/system/pg_upgrade_output.d/20230720T184333.610/log/pg_upgrade_dump_16385.log"
2>&1
...

So yes, pg_upgrade starts a pg_dump session.
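(To see up front how much that dump has to enumerate, you can count the large objects; a minimal check against the standard catalog, using the same generic dbname placeholder as in the command above:

psql -p 5431 -d dbname -c "SELECT count(*) FROM pg_largeobject_metadata;"

Each row in that catalog becomes its own entry in the schema dump, even with --link.)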

It even runs a pg_restore at the end, which takes about 7 hours:

"/usr/lib/postgresql15/bin/pg_restore" --host /pg/PG15 --port 5432
--username postgres --create --exit-on-error --verbose --dbname
template1
"/pg/PG15/system/pg_upgrade_output.d/20230720T184333.610/dump/pg_upgrade_dump_16385.custom"
>>
"/pg/PG15/system/pg_upgrade_output.d/20230720T184333.610/log/pg_upgrade_dump_16385.log"
2>&1

with a lot of lines in the logfile like: pg_restore: executing BLOB 11108809
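A rough way to follow the restore's progress (a sketch, reusing the log path from the command above) is to count those lines as they accumulate:

grep -c "executing BLOB" "/pg/PG15/system/pg_upgrade_output.d/20230720T184333.610/log/pg_upgrade_dump_16385.log"

Each match is one large object recreated, so comparing the count with the total in pg_largeobject_metadata shows how far along it is.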

The server is a VM with 64 GB RAM running SUSE SLES, attached to a SAN with SSD disks (HP 3PAR).

2) vacuumlo

vacuumlo -n => I have vacuumlo; running it:

vacuumlo dbname -v

postgres(at)dbfluxd02:/pg/data> vacuumlo  dflux -v
Connected to database "dflux"
Checking message_oid in fluxnaf.flux_message_content
Checking payload_oid in fluxnaf.bridge_payload_content
Successfully removed 0 large objects from database "dflux".
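So nothing is orphaned. A quick way to see how many distinct large objects there actually are (a sketch against the standard catalogs; pg_largeobject stores one row per 2 kB chunk, while pg_largeobject_metadata has one row per object):

psql -d dflux -c "SELECT count(*) FROM pg_largeobject_metadata;"

That per-object count is the number of entries pg_upgrade has to dump and restore one by one.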

and again the same issue: the upgrade takes just as long.

On 20/07/2023 16:51, Scott Ribe wrote:
>> On Jul 20, 2023, at 7:46 AM, Jef Mortelle <jefmortelle(at)gmail(dot)com> wrote:
>>
>> So: not possible to have very little downtime if you have a database with a lot of rows containing text as datatype, as pg_upgrade needs 12 hr for 24 million rows in pg_largeobject.
> We need to get terminology straight, as at the moment your posts are very confusing. In PostgreSQL, large objects and text are not the same. Text is basically varchar without a specified length limit. A large object is a blob (but not what SQL calls a BLOB)--it is kind of like a file stored outside the normal table mechanism, and provides facilities for partial reads, etc: https://www.postgresql.org/docs/15/largeobjects.html. There are a number of ways to wind up with all references to large objects deleted while the orphaned large objects themselves are still in the database.
>
> First thing you should do: run vacuumlo -n to find out if you have orphaned large objects. If so, start cleaning those up, then see how long pg_upgrade takes.
>
> Second, what's your hardware? I really don't see dump & restore of a 1TB database taking 6 hours.
>
>> Already tried to use --link and --jobs, but you cannot omit the "SELECT lo_unlink ..." for every row containing datatype text in your database that the pg_* program creates in the export/dump file.
> Terminology again, or are you conflating two different issues? pg_upgrade --link does not create a dump file.
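(To make the text vs. large-object distinction above concrete, a minimal sketch; the table and column names are made up:

-- text: the value is stored in the table itself (TOASTed when large)
CREATE TABLE doc (id int PRIMARY KEY, body text);

-- large object: the table stores only an OID pointing into pg_largeobject
CREATE TABLE doc_lo (id int PRIMARY KEY, body_oid oid);
INSERT INTO doc_lo VALUES (1, lo_from_bytea(0, 'payload'));

-- deleting the row orphans the large object: the data stays in
-- pg_largeobject until something runs lo_unlink on it
DELETE FROM doc_lo WHERE id = 1;

That orphaning is what vacuumlo looks for; here it found none, so all the large objects are genuinely referenced.)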
