From: Imre Samu <pella(dot)samu(at)gmail(dot)com>
To: PostgreSQL mailing lists <pgsql-general(at)postgresql(dot)org>
Subject: Re: Regarding db dump with Fc taking very long time to completion
Date: 2019-10-16 12:26:19
Message-ID: CAJnEWwkRdJ6cQ2o0jKAD1ktae9XY+PSR7DDWr1aEgcG=_iKdPg@mail.gmail.com
Lists: pgsql-general
Hi,
Maybe you can re-use these backup tricks:
"Speeding up dump/restore process"
https://www.depesz.com/2009/09/19/speeding-up-dumprestore-process/
for example:
"""
Idea was: All these tables had primary key based on serial. We could
easily get min and max value of the primary key column, and then split it
into half-a-million-ids "partitions", then dump them separately using:

psql -qAt -c "COPY ( SELECT * FROM TABLE WHERE id BETWEEN x AND y) TO
STDOUT" | gzip -c - > TABLE.x.y.dump
"""
best,
Imre
Durgamahesh Manne <maheshpostgres9(at)gmail(dot)com> wrote (on Fri, Aug 30, 2019, 11:51):
> Hi
> To the respected international PostgreSQL team
>
> I am using PostgreSQL version 11.4.
> I have a scheduled logical dump job that runs once daily at the database level.
> There is one table in the database with write-intensive activity every 40 seconds.
> The size of that table is about 88GB.
> The logical dump of that table takes more than 7 hours to complete.
>
> I need to reduce the dump time of that table, which is 88GB in size.
>
>
> Regards
> Durgamahesh Manne