From: "Imseih (AWS), Sami" <simseih(at)amazon(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>, Joe Conway <mail(at)joeconway(dot)com>
Cc: Michael Banck <mbanck(at)gmx(dot)net>, Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>, Frédéric Yhuel <frederic(dot)yhuel(at)dalibo(dot)com>, Nathan Bossart <nathandbossart(at)gmail(dot)com>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, David Rowley <dgrowleyml(at)gmail(dot)com>
Subject: Re: New GUC autovacuum_max_threshold ?
Date: 2024-05-01 18:19:03
Message-ID: 6B3881B9-29C4-4649-BEB7-0782C9595CBB@amazon.com
Lists: pgsql-hackers
I've been following this discussion and would like to add my 2 cents.
> Unless I'm missing something major, that's completely bonkers. It
> might be true that it would be a good idea to vacuum such a table more
> often than we do at present, but there's no shot that we want to do it
> that much more often.
This is really an important point.

If the threshold is too small, autovacuum will constantly be vacuuming a fairly
large and busy table with many indexes.
If the threshold is large, say 100 or 200 million, I question whether you want
autovacuum to be doing the cleanup work at all. Going that long between autovacuums
on a table suggests something may be misconfigured in your autovacuum settings.
At that point, aren't you better off performing a manual VACUUM and
taking advantage of parallel index vacuuming?
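To make the trade-off above concrete, here is a small sketch (in Python, purely
illustrative) of PostgreSQL's standard autovacuum trigger formula, with the
proposed cap applied on top. The function name and the `max_threshold` parameter
are assumptions standing in for the GUC under discussion, not actual server code.

```python
def vacuum_threshold(reltuples,
                     base_threshold=50,     # autovacuum_vacuum_threshold default
                     scale_factor=0.2,      # autovacuum_vacuum_scale_factor default
                     max_threshold=None):   # hypothetical autovacuum_max_threshold
    """Approximate number of dead tuples needed to trigger autovacuum."""
    threshold = base_threshold + scale_factor * reltuples
    if max_threshold is not None:
        # The proposed GUC would cap the computed threshold.
        threshold = min(threshold, max_threshold)
    return threshold

# With default settings, a 1-billion-row table accumulates ~200 million
# dead tuples before autovacuum triggers:
print(vacuum_threshold(1_000_000_000))
# A 100-million cap halves that interval; a very small cap would make
# autovacuum run orders of magnitude more often on the same table:
print(vacuum_threshold(1_000_000_000, max_threshold=100_000_000))
```

This illustrates both failure modes: a tiny cap keeps autovacuum perpetually busy on
large tables, while a cap in the hundreds of millions changes behavior so rarely that
a manual VACUUM may be the better tool anyway.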
Regards,
Sami Imseih
Amazon Web Services (AWS)