From: Massimo Ortensi <mortensi(at)unimaticaspa(dot)it>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: Out of memory error during pg_upgrade in big DB with large objects
Date: 2022-11-22 13:27:09
Message-ID: 1oxTIr-0073TD-ER@unimaticaspa.it
Lists: pgsql-admin
I tried it (even though version 15 is not feasible for us at the moment, as we have only tested version 14).
It ended with the same out-of-memory failure, just more quickly (1 hour
instead of 12 hours).
On 21/11/2022 18:30, Tom Lane wrote:
> Massimo Ortensi <mortensi(at)unimaticaspa(dot)it> writes:
>> I'm trying to upgrade a huge DB from postgres 10 to 14
>> This cluster is 70+ TB, with one database having more than 2 billion
>> records in pg_largeobject
>> I'm trying pg_upgrade in hard link mode, but the dump of database schema
>> phase always fails with
>> pg_dump: error: query failed: out of memory for query result
>> pg_dump: error: query was: SELECT l.oid, (SELECT rolname FROM
>> pg_catalog.pg_roles WHERE oid = l.lomowner) AS rolname, (SELECT
>> pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM
> FWIW, this query was rewritten pretty substantially in v15.
> It's still going to produce a row per large object, but it
> should be a lot narrower because most of the ACL-wrangling
> now happens somewhere else. I don't know if migrating to
> v15 instead of v14 is an option for you, and I can't promise
> that that'd be enough savings to fix it anyway. But it's
> something to think about.
>
> regards, tom lane
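
For readers hitting the same wall: pg_dump retrieves this result through libpq's ordinary query mode, which buffers the entire result set in client memory before any row is processed. A back-of-envelope sketch shows why one row per large object is fatal at this scale; the per-row byte count here is an illustrative assumption, not a measurement:

```python
# Rough, illustrative estimate of client-side memory needed by pg_dump
# to buffer the per-large-object query result. libpq's plain query mode
# materializes every row before returning, so all rows are resident at once.
rows = 2_000_000_000    # large objects reported in the thread
bytes_per_row = 100     # ASSUMPTION: oid + owner name + ACL text + per-row overhead

total_gib = rows * bytes_per_row / 2**30
print(f"~{total_gib:.0f} GiB just to buffer the query result")  # ~186 GiB
```

Even if the real per-row footprint is smaller than assumed here (the v15 rewrite Tom mentions makes each row narrower), the total still scales linearly with the number of large objects, which is why the failure merely arrives faster rather than disappearing.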