From: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
To: M Sarwar <sarwarmd02(at)outlook(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Holger Jakobs <holger(at)jakobs(dot)com>, "pgsql-admin(at)lists(dot)postgresql(dot)org" <pgsql-admin(at)lists(dot)postgresql(dot)org>, "rajeshkumar(dot)dba09(at)gmail(dot)com" <rajeshkumar(dot)dba09(at)gmail(dot)com>
Subject: Re: Pg_dump
Date: 2023-12-07 19:39:06
Message-ID: 202312071939.dwcjj5lwivaa@alvherre.pgsql
Lists: pgsql-admin
On 2023-Dec-07, M Sarwar wrote:
> I agree with Tom. This is making the difference. I ran into this scenario several times in the past.
> But the whole database becomes slow while the dump is happening.
For large databases with a very high rate of updates, a running pg_dump
can prevent vacuum from removing old row versions. This can slow
operations down because of the accumulated bloat.
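
A quick way to check whether a long-running backend (such as pg_dump)
is holding back the xmin horizon is a query along these lines against
pg_stat_activity (just a sketch; the LIMIT is arbitrary):

    -- backends with the oldest xmin; vacuum cannot remove row
    -- versions still visible to the oldest backend_xmin
    SELECT pid, backend_xmin, xact_start, query
    FROM pg_stat_activity
    WHERE backend_xmin IS NOT NULL
    ORDER BY age(backend_xmin) DESC
    LIMIT 5;

If the dump's backend shows up at the top with a large age(), it is the
one preventing cleanup.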
For such situations, pg_dump is not really recommended. It's better to
use a physical backup (say, pgbarman), or, if you really need a pg_dump
output file for some reason, create a replica (with _no_
hot_standby_feedback) and run pg_dump there.
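
For illustration, turning feedback off on the replica could look like
this (run on the replica; hot_standby_feedback only needs a reload):

    -- stop the replica from holding back vacuum on the primary
    ALTER SYSTEM SET hot_standby_feedback = off;
    SELECT pg_reload_conf();

and then point pg_dump at the replica's host instead of the primary
(the host and database names are whatever your setup uses).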
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"I'm always right, but sometimes I'm more right than other times."
(Linus Torvalds)
https://lore.kernel.org/git/Pine(dot)LNX(dot)4(dot)58(dot)0504150753440(dot)7211(at)ppc970(dot)osdl(dot)org/