From: Jean-Marc Lessard <Jean-Marc(dot)Lessard(at)ultra-ft(dot)com>
To: Michael Lewis <mlewis(at)entrata(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: RE: Does the work made by vaccum in the current pass is lost when interrupted?
Date: 2020-10-16 12:05:42
Message-ID: QB1PR01MB36334C6FE1F518EDE242867E94030@QB1PR01MB3633.CANPRD01.PROD.OUTLOOK.COM
Lists: pgsql-general
Thank you Michael for your suggestion,
I will reduce vacuum_cost_delay and increase vacuum_cost_limit to speed up the vacuum (a rough sketch of the session-level settings is below).
But the I/O system is under heavy load on the PROD system and I do not want to impact end-user performance.
If aggressive vacuum_cost_delay and vacuum_cost_limit settings slow down end-user performance significantly, I may have to kill the vacuum, but I would like to keep as much of the vacuum work as possible.
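For reference, this is roughly what I have in mind for the manual run; the table name and the values are only placeholders I still need to tune, and the cancel query assumes the vacuum was started as a plain VACUUM statement:

  SET vacuum_cost_delay = 0;       -- disable cost-based throttling for this session only
  SET vacuum_cost_limit = 2000;    -- or keep throttling, but with a larger I/O budget
  VACUUM (VERBOSE) my_big_table;   -- placeholder table name

  -- from another session, if end-user performance suffers too much:
  SELECT pg_cancel_backend(pid)
  FROM pg_stat_activity
  WHERE query LIKE 'VACUUM%';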
My concern is about the size of the max_dead_tuples buffer, which is determined by autovacuum_work_mem:
* autovacuum_work_mem is currently 1GB (defaults to maintenance_work_mem)
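To double-check which memory setting actually applies and how large the dead-tuple array can grow, I am looking at something like this (my understanding is that each dead tuple pointer takes 6 bytes, so the counts in the comments are approximate):

  SHOW maintenance_work_mem;   -- used by a manual VACUUM
  SHOW autovacuum_work_mem;    -- used by autovacuum workers; -1 falls back to maintenance_work_mem
  -- at 6 bytes per dead tuple pointer, 1GB holds roughly 180 million TIDs
  -- and 32MB roughly 5.6 million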
When I ran a vacuum on a DEV system, it made a single “scanning heap / vacuuming indexes / vacuuming heap” cycle of about 12 hours.
The vacuuming indexes phase took half the total time (6 hrs).
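While the vacuum runs, the phases can be followed with something like the query below (assuming pg_stat_progress_vacuum is available in our PostgreSQL version; num_dead_tuples/max_dead_tuples show how full the dead-tuple array is):

  SELECT pid, relid::regclass, phase,
         heap_blks_scanned, heap_blks_total,
         index_vacuum_count, num_dead_tuples, max_dead_tuples
  FROM pg_stat_progress_vacuum;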
Q1: I guess that if I have to kill the vacuum midway, I will lose all the work it did during the scanning heap step, right?
Q2: If I reduce autovacuum_work_mem to 32MB, is there any performance side effect on the vacuum?
thanks