Re: pg_upgrade failing for 200+ million Large Objects

From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: Michael Banck <mbanck(at)gmx(dot)net>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Nathan Bossart <nathandbossart(at)gmail(dot)com>, vignesh C <vignesh21(at)gmail(dot)com>, "Kumar, Sachin" <ssetiya(at)amazon(dot)com>, Robins Tharakan <tharakan(at)gmail(dot)com>, Jan Wieck <jan(at)wi3ck(dot)info>, Bruce Momjian <bruce(at)momjian(dot)us>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Magnus Hagander <magnus(at)hagander(dot)net>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date: 2024-03-27 09:53:51
Message-ID: a71f1582102a9fafcdf98094b8a221c5438e0b42.camel@cybertec.at
Lists: pgsql-hackers

On Wed, 2024-03-27 at 10:20 +0100, Michael Banck wrote:
> Also, is there a chance this is going to be back-patched? I guess it
> would be enough if the upgrade target is v17, so it is less of a concern,
> but it would be nice if people with millions of large objects are not
> stuck until they are ready to upgrade to v17.

It is quite an invasive patch, and it adds new features (pg_restore in
bigger transaction batches), so I don't think it is a candidate for
backpatching, desirable as that may seem from the usability angle.

Yours,
Laurenz Albe
