RE: Upgrade from 11.3 to 13.1 failed with out of memory

From: Mihalidesová Jana <jana(dot)mihalidesova(at)cetin(dot)cz>
To: Magnus Hagander <magnus(at)hagander(dot)net>
Cc: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: RE: Upgrade from 11.3 to 13.1 failed with out of memory
Date: 2021-04-07 09:24:56
Message-ID: DBBPR08MB4888B41EE7F217CE3AED1C7190759@DBBPR08MB4888.eurprd08.prod.outlook.com
Lists: pgsql-general

Hi,

Thanks a lot for the information.

Best regards,
Jana

-----Original Message-----
From: Magnus Hagander <magnus(at)hagander(dot)net>
Sent: Tuesday, April 6, 2021 3:23 PM
To: Mihalidesová Jana <jana(dot)mihalidesova(at)cetin(dot)cz>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Upgrade from 11.3 to 13.1 failed with out of memory

On Tue, Apr 6, 2021 at 3:08 PM Mihalidesová Jana <jana(dot)mihalidesova(at)cetin(dot)cz> wrote:
>
> Hi,
>
>
>
> I have an approximately 560GB database and am trying to upgrade it from 11.3 to 13.1. I’ve successfully upgraded the dev, test and ref environments, but on production pg_dump failed with out of memory. Yes, of course, dev, test and ref are much, much smaller than the production database.
>
> We are using the OID data type, so there are a lot of large objects; pg_largeobject is 59GB.
>
> The upgrade process fails during the schema-only pg_dump, so I’m confused why the 35GB of RAM free on the server isn’t enough when no data is being dumped. When I tried to run the same pg_dump command by hand as during the upgrade, it failed at the line pg_dump: reading large objects.
>
>
>
> Creating dump of global objects "/pgsql/bin/13.1_XXXX/bin/pg_dumpall" --host /pgsql/data/XXXX --port 50432 --username XXXXXX --globals-only --quote-all-identifiers --binary-upgrade --verbose -f pg_upgrade_dump_globals.sql >> "pg_upgrade_utility.log" 2>&1
>
> ok
>
> Creating dump of database schemas
>
> "/pgsql/bin/13.1_XXXX/bin/pg_dump" --host /pgsql/data/XXXX --port
> 50432 --username XXXXXX --schema-only --quote-all-identifiers
> --binary-upgrade --format=custom --verbose
> --file="pg_upgrade_dump_16384.custom" 'dbname=XXXX' >>
> "pg_upgrade_dump_16384.log" 2>&1
>
>
>
> *failure*
>
> There were problems executing ""/pgsql/bin/13.1_XXXX/bin/pg_dump" --host /pgsql/data/XXXX --port 50432 --username pgpnip --schema-only --quote-all-identifiers --binary-upgrade --format=custom --verbose --file="pg_upgrade_dump_16384.custom" 'dbname=XXXX' >> "pg_upgrade_dump_16384.log" 2>&1"
>
>
>
>
>
> Do you have any idea how to upgrade the database? This is my upgrade command:
>
>
>
> /pgsql/bin/13.1_XXXX/bin/pg_upgrade -k -b /pgsql/bin/11.3_XXXX/bin -B
> /pgsql/bin/13.1_XXXX/bin -d /pgsql/data/XXXX -D
> /pgsql/data/XXXX/XXXX.new
>

This is unfortunately a known limitation in pg_dump (and therefore, by proxy, a problem for pg_upgrade) when you have many large objects. It doesn't really matter how big they are, it matters how
*many* they are. It takes a long time and uses crazy amounts of memory, but that's unfortunately where it's at. You'd have the same problem with a plain dump/reload as well, not just the "binary upgrade mode".
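(To gauge how many objects pg_dump has to track, a standard catalog query works; pg_largeobject_metadata has one row per large object. This is just a diagnostic sketch, not part of the upgrade tooling:)

```sql
-- Each row here is one large object that pg_dump must hold metadata for
-- in memory during the "reading large objects" step.
SELECT count(*) AS large_object_count
FROM pg_largeobject_metadata;

-- On-disk size of the large object store, for comparison:
SELECT pg_size_pretty(pg_table_size('pg_largeobject'));
```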

There's been some recent work on trying to find a remedy for this, but nothing is available at this point. You'll need to either trim the number of objects if you can (by maybe manually dumping them out to files before the restore and then reloading them back in later), or just add more memory/swap to the machine.
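(The "manually dumping them out to files" approach could look roughly like the sketch below. The database name and the /backup/lo directory are placeholders; run the export, verify the files, and only then unlink the objects:)

```shell
#!/bin/sh
# Sketch only -- assumes psql access to the database and a scratch
# directory with enough space. Exports each large object to a file
# named after its OID so it can be re-imported with \lo_import later.
DB=mydb
DEST=/backup/lo
mkdir -p "$DEST"

psql -At -d "$DB" -c 'SELECT oid FROM pg_largeobject_metadata' |
while read loid; do
    # \lo_export is a psql meta-command: client-side export of one object.
    psql -d "$DB" -c "\lo_export $loid $DEST/$loid"
done

# After verifying the exported files, the objects can be removed so
# pg_dump no longer has to track them, e.g. per object:
#   psql -d "$DB" -c "SELECT lo_unlink($loid);"
```

This needs a live server, so treat it as a template rather than something to paste verbatim; in particular, make sure no application still references the unlinked OIDs before dropping them.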

Long term you should probably consider switching to using bytea columns when you have that many objects.
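(A bytea migration for a hypothetical table docs with an OID column content could be sketched like this; lo_get(), available since 9.4, reads an entire large object as bytea:)

```sql
-- Hypothetical schema: docs(content oid) referencing large objects.
ALTER TABLE docs ADD COLUMN content_bytea bytea;
UPDATE docs SET content_bytea = lo_get(content);

-- Once the copy is verified, free the large objects and swap columns:
SELECT lo_unlink(content) FROM docs;
ALTER TABLE docs DROP COLUMN content;
ALTER TABLE docs RENAME COLUMN content_bytea TO content;
```

For very large tables you would batch the UPDATE to keep transaction size and WAL volume manageable.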

--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
