From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Jan Wieck <jan(at)wi3ck(dot)info>, Magnus Hagander <magnus(at)hagander(dot)net>, Robins Tharakan <tharakan(at)gmail(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date: 2021-03-20 16:53:40
Message-ID: 227228.1616259220@sss.pgh.pa.us
Lists: pgsql-hackers
Bruce Momjian <bruce(at)momjian(dot)us> writes:
> On Sat, Mar 20, 2021 at 11:23:19AM -0400, Tom Lane wrote:
>> Of course, that just reduces the memory consumption on the client
>> side; it does nothing for the locks. Can we get away with releasing the
>> lock immediately after doing an ALTER OWNER or GRANT/REVOKE on a blob?

> Well, in pg_upgrade mode you can, since there are no other cluster
> users, but you might be asking for general pg_dump usage.

Yeah, this problem doesn't only affect pg_upgrade scenarios, so it'd
really be better to find a way that isn't dependent on binary-upgrade
mode.
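
The lock pile-up being discussed can be sketched roughly as follows (an illustrative fragment, not from the original mail; the blob OIDs and role name are made up). Each per-blob command in the restore script takes a lock that is only released at transaction end, and the shared lock table is sized by max_locks_per_transaction * (max_connections + max_prepared_transactions):

```sql
BEGIN;
-- every such statement acquires a lock on its large object,
-- held until COMMIT
ALTER LARGE OBJECT 16384 OWNER TO someuser;   -- lock #1
ALTER LARGE OBJECT 16385 OWNER TO someuser;   -- lock #2
-- ... one lock per blob; with 200+ million blobs this overflows
-- the shared lock table long before COMMIT ...
COMMIT;                                       -- locks released only here
```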
			regards, tom lane