From: | Michael Lewis <mlewis(at)entrata(dot)com> |
---|---|
To: | Jean-Marc Lessard <Jean-Marc(dot)Lessard(at)ultra-ft(dot)com> |
Cc: | pgsql-general <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: Does the work made by vaccum in the current pass is lost when interrupted? |
Date: | 2020-10-09 14:47:33 |
Message-ID: | CAHOFxGodLAD0cK1dKvMd-01e+ObMAx9U-kGn-COjJfEgkwUpcQ@mail.gmail.com |
Lists: | pgsql-general |
On Thu, Oct 8, 2020, 1:16 PM Jean-Marc Lessard <
Jean-Marc(dot)Lessard(at)ultra-ft(dot)com> wrote:
> I have a large table (billions of records) which has not been vacuumed and
> is bloated.
>
> Vacuum scale factor was left at the default.
>
>
>
> I ran a vacuum on a DEV system and it makes several passes (scanning
> heap/vacuuming indexes/vacuuming heap) which each take more than an hour.
>
> On a PROD system, I may have to kill the job midway.
>
> Should I reduce the autovacuum_work_mem of my session? Currently 1GB
>
Increase it if you can. The more memory available for tracking dead tuples,
the more of the heap each pass can cover before vacuum has to stop scanning
and make an index-vacuuming pass.
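
As a sketch (table name and value are placeholders, not from the thread): a
manual VACUUM reads maintenance_work_mem, not autovacuum_work_mem, so a
session could raise it like this before starting:

```sql
-- Hypothetical session-level override; my_big_table is a placeholder.
-- A larger dead-tuple memory budget means fewer heap/index passes.
SET maintenance_work_mem = '2GB';
VACUUM (VERBOSE) my_big_table;
```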
Have you tuned any of the vacuum-related settings? If your I/O system can
handle it, setting vacuum_cost_delay very low may help, especially on PG12+
where it can be set below 1ms. Otherwise, reduce it to 1ms and also increase
vacuum_cost_limit so vacuum does more work between each pause.
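
For example, a minimal sketch of that tuning for a manual vacuum session on
PG12+ (the values shown are illustrative, not recommendations from the
thread):

```sql
-- Hypothetical session settings; my_big_table is a placeholder.
SET vacuum_cost_delay = '0.5ms';  -- fractional milliseconds allowed since PG12
SET vacuum_cost_limit = 10000;    -- do more work per cost cycle before sleeping
VACUUM (VERBOSE) my_big_table;
```

On older versions, where the delay is an integer number of milliseconds, the
equivalent move is `SET vacuum_cost_delay = 1` combined with a higher cost
limit.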
>
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Guyren Howe | 2020-10-09 16:25:58 | Any interest in adding match_recognize? |
| Previous Message | Allan Kamau | 2020-10-09 11:31:43 | Re: Does the work made by vaccum in the current pass is lost when interrupted? |