| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> | 
|---|---|
| To: | Massimo Ortensi <mortensi(at)unimaticaspa(dot)it> | 
| Cc: | pgsql-admin(at)lists(dot)postgresql(dot)org | 
| Subject: | Re: Out of memory error during pg_upgrade in big DB with large objects | 
| Date: | 2022-11-21 17:30:20 | 
| Message-ID: | 3215516.1669051820@sss.pgh.pa.us | 
| Lists: | pgsql-admin | 
Massimo Ortensi <mortensi(at)unimaticaspa(dot)it> writes:
> I'm trying to upgrade a huge DB from postgres 10 to 14
> This cluster is 70+ TB, with one database having more than 2 billion 
> records in pg_largeobject
> I'm trying pg_upgrade in hard link mode, but the dump of database schema 
> phase always fails with
> pg_dump: error: query failed: out of memory for query result
> pg_dump: error: query was: SELECT l.oid, (SELECT rolname FROM 
> pg_catalog.pg_roles WHERE oid = l.lomowner) AS rolname, (SELECT 
> pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM 
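The failing query runs over pg_catalog.pg_largeobject_metadata and returns one row per large object, and pg_dump's libpq result buffering holds the entire result set in client memory at once (hence "out of memory for query result"). As a minimal diagnostic sketch, assuming access to the source cluster, one can gauge how many rows pg_dump must materialize:

```sql
-- Rough gauge of the result set pg_dump must buffer client-side:
-- one row per large object in this catalog.
SELECT count(*) FROM pg_catalog.pg_largeobject_metadata;
```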
FWIW, this query was rewritten pretty substantially in v15.
It's still going to produce a row per large object, but it
should be a lot narrower because most of the ACL-wrangling
now happens somewhere else.  I don't know if migrating to
v15 instead of v14 is an option for you, and I can't promise
that that'd be enough savings to fix it anyway.  But it's
something to think about.
regards, tom lane
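As a hedged follow-up to the point about the v15 query being "a lot narrower": the ACL data is the wide part of each row, and lomacl in pg_largeobject_metadata is NULL for large objects with default privileges. A small sketch (assuming superuser access on the source cluster) to see how many rows actually carry explicit ACL payload:

```sql
-- count(lomacl) counts only non-NULL entries, i.e. large objects
-- whose privileges differ from the default.
SELECT count(*)      AS total_large_objects,
       count(lomacl) AS with_explicit_acls
FROM pg_catalog.pg_largeobject_metadata;
```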