Re: Improving pg_dump performance when handling large numbers of LOBs

From: Shaheed Haque <shaheedhaque(at)gmail(dot)com>
To: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
Cc: "pgsql-generallists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Improving pg_dump performance when handling large numbers of LOBs
Date: 2024-02-06 07:31:40
Message-ID: CAHAc2jdGFMGnhtuZWP2bqv06trjA1RpUcjFBqFZ9S+Wpk9q7CA@mail.gmail.com
Lists: pgsql-general

Might it be worth a modest amount of time running some basic profiling to see
where the time is going? A week is a looonnngg time, even for 150e6
operations. For example, if there is an unexpectedly high I/O load, some
temporary M.2 storage might help?
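For a rough sense of scale before profiling, the per-object time budget implied by the thread's numbers (one week, 150 million large objects, ~250 GB total) can be worked out directly. A back-of-the-envelope sketch, using only figures quoted in the thread:

```python
# Back-of-the-envelope: what does a one-week pg_dump of
# 150 million large objects imply per object?
SECONDS_PER_WEEK = 7 * 24 * 3600   # 604800 s
NUM_LOBS = 150_000_000             # figure from the thread
TOTAL_BYTES = 250 * 10**9          # ~250 GB, figure from the thread

per_object_ms = SECONDS_PER_WEEK / NUM_LOBS * 1000
throughput_mb_s = TOTAL_BYTES / SECONDS_PER_WEEK / 1e6

print(f"~{per_object_ms:.2f} ms per large object")  # ~4.03 ms each
print(f"~{throughput_mb_s:.2f} MB/s overall")       # ~0.41 MB/s
```

~0.4 MB/s is far below what even modest storage sustains, so if profiling shows the disks are not saturated, the time is more likely going to per-object overhead (one TOC entry and round trip per blob) than to raw I/O bandwidth.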

On Tue, 6 Feb 2024, 01:36 Ron Johnson, <ronljohnsonjr(at)gmail(dot)com> wrote:

> On Mon, Feb 5, 2024 at 2:01 PM Wyatt Tellis <wyatt(dot)tellis(at)gmail(dot)com>
> wrote:
>
>> Hi,
>>
>> We've inherited a series of legacy PG 12 clusters that each contain a
>> database that we need to migrate to a PG 15 cluster. Each database contains
>> about 150 million large objects totaling about 250GB.
>>
>
> 250*10^9 / (150*10^6) = 1667 bytes. That's *tiny*.
>
> Am I misunderstanding you?
>
>>
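Spelling out the average-size arithmetic above (numbers as quoted in the thread):

```python
# Average large-object size: total bytes over object count.
TOTAL_BYTES = 250 * 10**9   # ~250 GB, figure from the thread
NUM_LOBS = 150 * 10**6      # 150 million large objects

avg_bytes = TOTAL_BYTES / NUM_LOBS
print(f"average large object: ~{avg_bytes:.0f} bytes")  # ~1667 bytes
```

At roughly 1.7 kB apiece these would comfortably fit inline in a bytea column; at that size, dump cost is dominated by per-object bookkeeping (pg_dump tracks each large object as its own archive entry) rather than by data volume.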
