Re: Horribly slow pg_upgrade performance with many Large Objects

From: Hannu Krosing <hannuk(at)google(dot)com>
To: Nathan Bossart <nathandbossart(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Horribly slow pg_upgrade performance with many Large Objects
Date: 2025-04-08 16:13:28
Message-ID: CAMT0RQQxU69Ph-KB_opfKPXKt2N+yk_Us9q2or6wUX6bwotQHw@mail.gmail.com
Lists: pgsql-hackers

On Tue, Apr 8, 2025 at 5:46 PM Nathan Bossart <nathandbossart(at)gmail(dot)com> wrote:
>
> On Tue, Apr 08, 2025 at 09:35:24AM +0200, Hannu Krosing wrote:
> > On Tue, Apr 8, 2025 at 12:17 AM Nathan Bossart <nathandbossart(at)gmail(dot)com> wrote:
> >> That being said, I
> >> regularly hear about slow upgrades with many LOs, so I think it'd be
> >> worthwhile to try to improve matters in v19.
> >
> > Changing the LO export to dumping pg_largeobject_metadata content
> > instead of creating the LOs should be a nice small change confined to
> > pg_dump --binary-upgrade only so perhaps we could squeeze it in v18
> > still.
>
> Feature freeze for v18 was ~4 hours ago, so unfortunately this is v19
> material at this point.

Sure. But I actually think this is something that should be
back-ported to at least all supported versions at some point,
possibly made dependent on some environment flag so that only people
who desperately need it will get it.
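
For illustration, the idea above — having pg_dump --binary-upgrade emit the
pg_largeobject_metadata rows directly instead of one lo_create() call per
large object — might produce dump output along these lines. This is a
hypothetical sketch, not actual pg_dump output; the OIDs and ACL values are
made up, and it assumes the restore session is allowed to write to the
catalog (as binary-upgrade mode already does for preserving OIDs):

```sql
-- Hypothetical --binary-upgrade output: bulk-load large object metadata
-- in one COPY instead of emitting, per object:
--   SELECT pg_catalog.lo_create('16384');
-- The column layout matches pg_largeobject_metadata (oid, lomowner, lomacl).
COPY pg_catalog.pg_largeobject_metadata (oid, lomowner, lomacl) FROM stdin;
16384	10	\N
16385	10	{hannu=rw/hannu}
\.
```

A single COPY avoids the per-object round trip and catalog overhead, which
is where the time goes when there are millions of LOs.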

Btw, who would be the right person(s) to ask questions about the
internals of pg_dump?
I have a few more things in the pipeline to add there and would like
to make sure I have the right approach.

------
Hannu
