Re: Fix pg_upgrade to preserve datdba

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jan Wieck <jan(at)wi3ck(dot)info>
Cc: Magnus Hagander <magnus(at)hagander(dot)net>, Robins Tharakan <tharakan(at)gmail(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix pg_upgrade to preserve datdba
Date: 2021-03-21 19:56:14
Message-ID: 399567.1616356574@sss.pgh.pa.us
Lists: pgsql-hackers

Jan Wieck <jan(at)wi3ck(dot)info> writes:
> So let's focus on the actual problem of running out of XIDs and memory
> while doing the upgrade involving millions of small large objects.

Right. So as far as --single-transaction vs. --create goes, that's
mostly a definitional problem. As long as the contents of a DB are
restored in one transaction, it's not gonna matter if we eat one or
two more XIDs while creating the DB itself. So we could either
relax pg_restore's complaint, or invent a different switch that's
named to acknowledge that it's not really only one transaction.

That still leaves us with the lots-o-locks problem. However, once
we've crossed the Rubicon of "it's not really only one transaction",
you could imagine that the switch is "--fewer-transactions", and the
idea is for pg_restore to commit after every (say) 100000 operations.
That would both bound its lock requirements and greatly cut its XID
consumption.
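
To make the batching idea concrete, here is a minimal stand-alone sketch
of a commit-every-N restore loop, written against libpq rather than
pg_restore's internals (which is where a real --fewer-transactions switch
would live). The statement list, batch_size, and exec_or_die helper are
illustrative stand-ins, not anything from an actual patch:

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

static void
exec_or_die(PGconn *conn, const char *sql)
{
    PGresult   *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "%s: %s", sql, PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        exit(1);
    }
    PQclear(res);
}

int
main(void)
{
    /* Stand-ins for the statements a real restore would emit. */
    const char *stmts[] = {
        "CREATE TABLE t1 (a int)",
        "CREATE TABLE t2 (a int)",
        "CREATE TABLE t3 (a int)",
    };
    int         nstmts = 3;
    int         batch_size = 2;     /* the message suggests e.g. 100000 */
    int         ops_since_commit = 0;

    PGconn     *conn = PQconnectdb("");     /* uses PG* environment vars */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    exec_or_die(conn, "BEGIN");
    for (int i = 0; i < nstmts; i++)
    {
        exec_or_die(conn, stmts[i]);
        if (++ops_since_commit >= batch_size)
        {
            /*
             * Committing releases every lock taken so far and spends one
             * XID per batch, instead of one per statement as plain
             * autocommit restore does.
             */
            exec_or_die(conn, "COMMIT");
            exec_or_die(conn, "BEGIN");
            ops_since_commit = 0;
        }
    }
    exec_or_die(conn, "COMMIT");
    PQfinish(conn);
    return 0;
}

With batch_size set to 100000, a restore of millions of large objects
would hold at most 100000 locks at a time and consume roughly one XID per
100000 operations, which is the bound described above.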

The work you described sounded like it could fit into that paradigm,
with the additional ability to run some parallel restore tasks that
each consume a bounded number of locks.

regards, tom lane
