Re: Upgrade from PG12 to PG

From: Ron <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: Upgrade from PG12 to PG
Date: 2023-07-20 17:22:39
Message-ID: 78789644-f55a-51eb-8507-58415064414d@gmail.com
Lists: pgsql-admin

Jeff,

It does a "--schema-only" dump.  The /bigger/ the schema, the longer that
"pg_dump --schema-only" takes.

Note also that there's a known issue with pg_upgrade and /millions/ of Large
Objects (not bytea or text, but lo_* columns).
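
A quick way to see how many large objects you actually have: the catalog
pg_largeobject_metadata has one row per large object ("dbname" again a
placeholder):

  psql -d dbname -c 'SELECT count(*) FROM pg_largeobject_metadata;'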

On 7/20/23 12:05, Jef Mortelle wrote:
> running /usr/lib/postgresql15/bin/pg_upgrade -v -p 5431 -P 5432 -k
>
> gives you in the output :
>
> ...
> "/usr/lib/postgresql15/bin/pg_dump" --host /pg/PG15 --port 5431 --username
> postgres --schema-only --quote-all-identifiers --binary-upgrade
> --format=custom --verbose
> --file="/pg/PG15/system/pg_upgrade_output.d/20230720T184333.610/dump/pg_upgrade_dump_16385.custom"
> 'dbname=dbname' >>
> "/pg/PG15/system/pg_upgrade_output.d/20230720T184333.610/log/pg_upgrade_dump_16385.log"
> 2>&1
> ...
>
>
> so, yes, pg_upgrade starts a pg_dump session,
>
>
> and even runs a pg_restore at the end, which takes about 7 hours:
>
> "/usr/lib/postgresql15/bin/pg_restore" --host /pg/PG15 --port 5432
> --username postgres --create --exit-on-error --verbose --dbname template1
> "/pg/PG15/system/pg_upgrade_output.d/20230720T184333.610/dump/pg_upgrade_dump_16385.custom"
> >>
> "/pg/PG15/system/pg_upgrade_output.d/20230720T184333.610/log/pg_upgrade_dump_16385.log"
> 2>&1
>
> with a lot of lines in the log file like:  pg_restore: executing BLOB
> 11108809
>
>
> The server is a VM with 64 GB of RAM running SUSE SLES, attached to a SAN
> with SSD disks (HP 3PAR).
>
> 2) vacuumlo
>
> vacuumlo -n => I have vacuumlo; running this:
>
> vacuumlo  dbname  -v
>
> postgres(at)dbfluxd02:/pg/data> vacuumlo  dflux -v
> Connected to database "dflux"
> Checking message_oid in fluxnaf.flux_message_content
> Checking payload_oid in fluxnaf.bridge_payload_content
> Successfully removed 0 large objects from database "dflux".
>
> so no orphans to clean up, and the same issue remains.
>
>
>
> On 20/07/2023 16:51, Scott Ribe wrote:
>>> On Jul 20, 2023, at 7:46 AM, Jef Mortelle <jefmortelle(at)gmail(dot)com> wrote:
>>>
>>> So: not possible to have very little downtime if you have a database
>>> with a lot of rows containing text as a datatype, as pg_upgrade needs 12
>>> hours for 24 million rows in pg_largeobject.
>> We need to get terminology straight, as at the moment your posts are very
>> confusing. In PostgreSQL large objects and text are not the same. Text is
>> basically varchar without a specified length limit. Large object is a
>> blob (but not what SQL calls a BLOB)--it is kind of like a file stored
>> outside the normal table mechanism, and provides facilities for partial
>> reads, etc: https://www.postgresql.org/docs/15/largeobjects.html. There
>> are a number of ways to wind up with all references to large objects
>> deleted but the orphaned large objects themselves still in the database.
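>>
>> For example (table and column names here are purely illustrative), deleting
>> a row that references a large object does not delete the object itself:
>>
>>   CREATE TABLE docs (id int, content oid);
>>   INSERT INTO docs VALUES (1, lo_create(0));  -- creates a new large object
>>   DELETE FROM docs WHERE id = 1;              -- the large object is now orphaned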
>>
>> First thing you should do: run vacuumlo -n to find out if you have
>> orphaned large objects. If so, start cleaning those up, then see how long
>> pg_upgrade takes.
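>>
>> A dry run looks like this (-n only reports what would be removed; drop the
>> -n to actually delete the orphans; "dbname" is a placeholder):
>>
>>   vacuumlo -n -v dbname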
>>
>> Second, what's your hardware? I really don't see dump & restore of a 1TB
>> database taking 6 hours.
>>
>>> Already tried to use --link and --jobs, but you cannot omit the
>>> "select  lo_unlink ...."   for every row containing datatype text in
>>> your database that the pg_* program creates in the export/dump file.
>> Terminology again, or are you conflating two different issues? pg_upgrade
>> --link does not create a dump file.
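>>
>> With --link the new cluster hard-links the old cluster's data files instead
>> of copying them; only the schema goes through the internal dump/restore. A
>> typical invocation looks something like this (paths are illustrative):
>>
>>   pg_upgrade -b /usr/lib/postgresql12/bin -B /usr/lib/postgresql15/bin \
>>              -d /pg/PG12/data -D /pg/PG15/data --link --jobs=4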

--
Born in Arizona, moved to Babylonia.
