Re: Horribly slow pg_upgrade performance with many Large Objects

From: Nathan Bossart <nathandbossart(at)gmail(dot)com>
To: Hannu Krosing <hannuk(at)google(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Horribly slow pg_upgrade performance with many Large Objects
Date: 2025-04-07 22:17:03
Message-ID: Z_RO37whB1L2LbiD@nathan
Lists: pgsql-hackers

On Mon, Apr 07, 2025 at 10:33:47PM +0200, Hannu Krosing wrote:
> The obvious solution would be to handle the table
> `pg_largeobject_metadata` the same way as we currently handle
> `pg_largeobject`, by not doing anything with it in `pg_dump
> --binary-upgrade` and just handling its contents like we do for user
> tables in pg_upgrade itself.
>
> This should work fine for all source database versions starting from PgSQL v12.
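
For reference, in --binary-upgrade mode pg_dump currently emits a few
commands for every large object, along these lines (the OID and role
names here are made up):

    SELECT pg_catalog.lo_create('16385');
    ALTER LARGE OBJECT 16385 OWNER TO app_owner;
    GRANT SELECT ON LARGE OBJECT 16385 TO app_reader;

Restoring millions of such per-LO statements is where the time goes; the
LO data itself is already transferred at the file level via
pg_largeobject.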

Unfortunately, the storage format for aclitem changed in v16, so this would
need to be restricted to upgrades from v16 and newer. That being said, I
regularly hear about slow upgrades with many LOs, so I think it'd be
worthwhile to try to improve matters in v19.
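
The aclitem issue matters here because pg_largeobject_metadata itself
stores the ACLs in an aclitem[] column:

    SELECT attname, atttypid::regtype
      FROM pg_attribute
     WHERE attrelid = 'pg_largeobject_metadata'::regclass
       AND attnum > 0;

      attname  | atttypid
    -----------+-----------
     oid       | oid
     lomowner  | oid
     lomacl    | aclitem[]

so a file-level copy of that table from a pre-v16 cluster would carry
the old on-disk aclitem representation (AclMode was widened to 64 bits
in v16) into the new one.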

--
nathan
