From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: "Tharakan, Robins" <tharar(at)amazon(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date: 2021-03-09 20:08:19
Message-ID: 20210309200819.GO2021@telsasoft.com
Lists: pgsql-hackers
On Wed, Mar 03, 2021 at 11:36:26AM +0000, Tharakan, Robins wrote:
> While reviewing a failed upgrade from Postgres v9.5 (to v9.6) I saw that the
> instance had ~200 million (in-use) Large Objects. I was able to reproduce
> this on a test instance which too fails with a similar error.
If pg_upgrade can't handle millions of objects/transactions/XIDs, that seems
like a legitimate complaint, since apparently the system is working okay
otherwise.
But it also seems like you're using it outside the range of its intended use
(See also [1]). I'm guessing that not many people are going to spend time
running pg_upgrade tests that each take 25 hours, not to mention require some
multiple of 128GB of RAM+swap.
Creating millions of real large objects like this was too slow for me to test:
| time { echo 'begin;'; for a in `seq 1 99999`; do echo '\lo_import /dev/null'; done; echo 'commit;'; } |psql -qh /tmp postgres&
This seems to be enough for what's needed:
| ALTER SYSTEM SET fsync=no; ALTER SYSTEM SET full_page_writes=no; SELECT pg_reload_conf();
| INSERT INTO pg_largeobject_metadata SELECT a, 0 FROM generate_series(100000, 200111222)a;
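For what it's worth, that shortcut only creates metadata rows, with no data
behind them, which is enough to exercise the dump/restore side (the counts
below are what I'd expect from the generate_series above):
| SELECT count(*) FROM pg_largeobject_metadata;  -- ~200 million rows
| SELECT count(*) FROM pg_largeobject;           -- 0: no data pages were written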
Now, in my testing, pg_upgrade was killed after running for 100 minutes and
using 60GB of RAM, so you might say that's a problem too. I converted
getBlobs() to use a cursor, like dumpBlobs(), but it was still killed. I think
a test case and a way to exercise this failure with a more reasonable amount of
time and resources might be a prerequisite for a patch to fix it.
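For reference, the shape of that change: instead of reading all of
pg_largeobject_metadata in one result set, getBlobs() pages through it the way
dumpBlobs() pages through the data. Roughly the SQL it ends up issuing (cursor
name and batch size are just illustrative; pg_dump already runs inside a
transaction, so DECLARE is fine):
| DECLARE blobmeta CURSOR FOR
|     SELECT oid, lomowner, lomacl FROM pg_largeobject_metadata;
| FETCH 1000 IN blobmeta;  -- repeat, building TOC entries per batch, until no rows come back
| CLOSE blobmeta;
That bounds the size of any one libpq result, but each blob still gets its own
TOC entry, so memory presumably still grows with the number of blobs, which
would explain why it was still killed.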
pg_upgrade is meant for "immediate" upgrades, frequently allowing upgrade in
minutes, where pg_dump | pg_restore might take hours or days. There are two
components to consider: the catalog/metadata part, and the data part. If the
data is large (let's say more than 100GB), then pg_upgrade is expected to be an
improvement over the "dump and restore" process, which is usually infeasible
for large DBs measured in TB.
But if the *catalog* part is large, pg_upgrade still has to run pg_dump and
pg_restore over it, and the time to do that can dominate the data part. Our own
customers' DBs are 100s of GB to 10TB. For large customers, pg_upgrade takes
45min. In the past, we had tables with many column defaults, which made the
dump+restore step slow for a larger fraction of customers.
If it were me, in an EOL situation, I would look at either: 1) finding a way to
do dump+restore rather than pg_upgrade; and/or 2) separately dumping the large
objects, dropping as many as you can, then running pg_upgrade on the DB, then
restoring the large objects (and finding a better way to store them in the future).
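Very roughly, and only if whatever references the large objects can cope with
them getting new OIDs on re-import (otherwise you'd need to keep a mapping),
option 2 could look something like this untested sketch (paths, database names,
and the lack of batching are all illustrative):
| psql -Atc "SELECT oid FROM pg_largeobject_metadata" olddb > /backup/lo_oids
| while read oid; do
|     psql -qc "\lo_export $oid /backup/lo/$oid" olddb   # save the contents
|     psql -qc "SELECT lo_unlink($oid)" olddb            # then drop the LO
| done < /backup/lo_oids
| # ... run pg_upgrade against the now-small catalog ...
| while read oid; do
|     psql -qc "\lo_import /backup/lo/$oid" newdb        # re-import; a new OID is assigned
| done < /backup/lo_oids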
I was able to hack pg_upgrade to call pg_restore --single-transaction (with a
separate invocation to handle --create). That passes tests... but I can't say
much beyond that.
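To sketch what that amounts to if run by hand (the real change is in
pg_upgrade's restore step; file and database names here are made up): restore
the CREATE DATABASE entries from the TOC first, then everything else under
--single-transaction:
| pg_restore -l pg_upgrade_dump_db.custom > toc.all
| grep ' DATABASE ' toc.all > toc.createdb    # just the database-creation entries
| grep -v ' DATABASE ' toc.all > toc.rest
| pg_restore -L toc.createdb --create --dbname template1 pg_upgrade_dump_db.custom
| pg_restore -L toc.rest --single-transaction --dbname mydb pg_upgrade_dump_db.custom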
Regarding your existing patch: "make check" only tests SQL features.
For development, you'll want to configure like:
|./configure --enable-debug --enable-cassert --enable-tap-tests
And then use "make check-world", and in particular:
time make check -C src/bin/pg_resetwal
time make check -C src/bin/pg_upgrade
I don't think pg_restore needs a user-facing option for XIDs. I think it
should "just work", since a user might be as likely to shoot themselves in the
foot with a commandline option as they are to make an upgrade succeed that
would otherwise fail. pg_upgrade has a --check mode, and if that passes, the
upgrade is intended to work rather than fail halfway through, between the
schema dump and restore, while expecting the user to know to rerun with some
command-line flags. If you pursue the patch that sets a different XID
threshold, maybe you could count the number of objects to be created, or
transactions to be used, and use that as the argument to pg_resetxlog? I'm not
sure, but pg_restore -l might be a good place to start looking.
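As a rough illustration of where I'd start looking (TOC entry names vary by
version; on these releases large objects show up as BLOB entries, and the dump
file name is made up):
| pg_restore -l pg_upgrade_dump_db.custom | grep -vc '^;'     # total restorable TOC entries
| pg_restore -l pg_upgrade_dump_db.custom | grep -c ' BLOB '  # just the large-object entries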
I think a goal for this patch should be to allow an increased number of
objects to be handled by pg_upgrade. Large objects may be a special case, and
increasing the number of other objects to be restored to the 100s of millions
might be unimportant.
--
Justin
[1] https://www.postgresql.org/message-id/502641.1606334432%40sss.pgh.pa.us
| Does pg_dump really have sane performance for that situation, or
| are we soon going to be fielding requests to make it not be O(N^2)
| in the number of listed tables?
Attachment: 0001-pg_dump-use-a-cursor-in-getBlobs.patch (text/x-diff, 5.2 KB)