From: Антон Глушаков <a(dot)glushakov86(at)gmail(dot)com>
To: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: pg_dump --binary-upgrade out of memory
Date: 2024-02-12 15:36:20
Message-ID: CAHnOmaeCZN0tOEfmu3VOLjDujfY2jbS0HaEfwTboY1knhfwzvQ@mail.gmail.com
Lists: pgsql-admin
Hi.

I ran into a problem upgrading my instance (14 -> 15) with pg_upgrade: the utility crashed with an out-of-memory error. After some investigation I found that the failure happens while the schema is being exported with pg_dump. I then tried to dump the schema manually with the --binary-upgrade option and hit the same out-of-memory error.
Digging a little deeper, I discovered quite a large number of large objects in the database (pg_largeobject is 10 GB and pg_largeobject_metadata is 1 GB, about 31 million rows).
I was able to reproduce the problem on a clean server simply by inserting some dummy rows into pg_largeobject_metadata:

$ insert into pg_largeobject_metadata (select i, 16390 from generate_series(107659, 34274365) as i);
$ pg_dump --binary-upgrade --format=custom -d mydb -s -f tmp.dmp

After 1-2 minutes pg_dump runs out of memory (I tried on servers with 4 GB and 8 GB of RAM).
Perhaps this is a bug? How can I perform the upgrade?