Improving pg_dump performance when handling large numbers of LOBs

From: Wyatt Tellis <wyatt(dot)tellis(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Improving pg_dump performance when handling large numbers of LOBs
Date: 2024-02-05 19:00:21
Message-ID: CANCMp8yTnRrVtEQnMMfxw_pxRSXsv40TWFUbnaR4rvDnBzJ2eg@mail.gmail.com

Hi,

We've inherited a series of legacy PG 12 clusters that each contain a
database that we need to migrate to a PG 15 cluster. Each database contains
about 150 million large objects totaling about 250GB. When using pg_dump
we've found that it takes a couple of weeks to dump out this much data.
We've tried using the jobs option with the directory format, but that
approach saves each LOB as a separate file, which makes moving the
resulting dump to another location unwieldy. Has anyone else had to deal
with dumping a database with this many LOBs? Are there any suggestions for
how to improve performance?
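
FWIW, the kind of invocation we've been running looks roughly like this
(job count, output path, and database name here are illustrative, not our
actual values):

    pg_dump --format=directory --jobs=8 \
            --file=/backups/legacydb.dir legacydb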

Thanks,

Wyatt
