Re: Regarding db dump with Fc taking very long time to completion

From: Durgamahesh Manne <maheshpostgres9(at)gmail(dot)com>
To: Luca Ferrari <fluca1978(at)gmail(dot)com>
Cc: PostgreSQL mailing lists <pgsql-general(at)postgresql(dot)org>
Subject: Re: Regarding db dump with Fc taking very long time to completion
Date: 2019-10-16 09:27:00
Message-ID: CAJCZkoKE+q6C67EwKLcn+H7xC2NaHs8pp7Yt0G1VK++EtypZRg@mail.gmail.com
Lists: pgsql-general

On Fri, Aug 30, 2019 at 4:12 PM Luca Ferrari <fluca1978(at)gmail(dot)com> wrote:

> On Fri, Aug 30, 2019 at 11:51 AM Durgamahesh Manne
> <maheshpostgres9(at)gmail(dot)com> wrote:
> > A logical dump of that table is taking more than 7 hours to complete.
> >
> > I need to reduce the dump time of that table, which is 88 GB in size.
>
> Good luck!
> I would see two possible solutions to the problem:
> 1) use a physical backup and switch to incremental (e.g., pgbackrest)
> 2) partition the table and back up individual pieces, if possible
> (constraints?), bearing in mind it will become harder to maintain
> (added partitions, and so on).
>
> Are all of the 88 GB written during a bulk process? I guess not, so by
> partitioning you could avoid locking the whole dataset and reduce
> contention (and thus time).
>
> Luca
>
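
A minimal sketch of Luca's first suggestion above (incremental physical
backups with pgbackrest); the stanza name and paths here are assumptions
for illustration, not taken from this thread:

    # /etc/pgbackrest/pgbackrest.conf (illustrative):
    #   [global]
    #   repo1-path=/var/lib/pgbackrest
    #   [mydb]
    #   pg1-path=/var/lib/postgresql/11/main
    #
    # postgresql.conf must also ship WAL to the repository:
    #   archive_mode = on
    #   archive_command = 'pgbackrest --stanza=mydb archive-push %p'

    # One-time stanza setup, then a full backup followed by cheap
    # incrementals that copy only files changed since the last backup:
    pgbackrest --stanza=mydb stanza-create
    pgbackrest --stanza=mydb --type=full backup
    pgbackrest --stanza=mydb --type=incr backup

Once the full backup exists, each incremental run takes time roughly
proportional to the amount of changed data rather than to the full table
size.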

Hi respected Postgres team,

Are all of the 88 GB written during a bulk process?
No.
The table was earlier 88 GB in size; it is now about 148 GB.
Is there any way to reduce the dump time of this 148 GB table without
creating partitions on it?
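
A minimal sketch of ways to speed up the dump itself without partitioning
(the host, database, table, and file names below are assumptions):

    # pg_dump's custom format (-Fc) compresses in a single process, which
    # is often the bottleneck on a large table. Turning its compression
    # off (-Z0) and piping through a parallel compressor such as pigz
    # can cut wall-clock time considerably:
    pg_dump -h myhost -U myuser -d mydb -t big_table -Fc -Z0 \
      | pigz -p 8 > big_table.dump.gz

    # Directory format (-Fd) with -j runs several workers in parallel,
    # but each table is still dumped by a single worker, so this helps
    # whole-database dumps with many tables more than one huge table:
    pg_dump -h myhost -U myuser -d mydb -Fd -j 4 -f /backups/mydb_dir

Note that a pigz-compressed custom-format dump must be decompressed before
pg_restore can read it, e.g. pigz -dc big_table.dump.gz | pg_restore -d mydb.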

Regards
Durgamahesh Manne
