From: Frédéric Yhuel <frederic(dot)yhuel(at)dalibo(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>, "Imseih (AWS), Sami" <simseih(at)amazon(dot)com>
Cc: Joe Conway <mail(at)joeconway(dot)com>, Michael Banck <mbanck(at)gmx(dot)net>, Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>, Nathan Bossart <nathandbossart(at)gmail(dot)com>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, David Rowley <dgrowleyml(at)gmail(dot)com>
Subject: Re: New GUC autovacuum_max_threshold ?
Date: 2024-05-02 06:44:26
Message-ID: afffa064-53ff-43f4-93ec-34dba8c1f9ba@dalibo.com
Lists: pgsql-hackers

On 01/05/2024 at 20:50, Robert Haas wrote:
> Possibly what we need here is
> something other than a cap, where, say, we vacuum a 10GB table twice
> as often as now, a 100GB table four times as often, and a 1TB table
> eight times as often. Or whatever the right answer is.
IMO, that would make more sense. So maybe something like this:
vacthresh = Min(vac_base_thresh + vac_scale_factor * reltuples,
                vac_base_thresh + vac_scale_factor * sqrt(reltuples) * 1000);
(this could also be used to compute a score, as in David's proposal)
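
For illustration, here is a minimal standalone sketch (not PostgreSQL code, it just mimics the formula) of how this Min() would behave with the stock defaults, assuming autovacuum_vacuum_threshold = 50 and autovacuum_vacuum_scale_factor = 0.2. The two terms cross at reltuples = 1e6; beyond that the sqrt-based term wins, so larger tables get proportionally lower thresholds:

/*
 * Hypothetical, self-contained demo of the proposed threshold formula.
 * Defaults assumed: vac_base_thresh = 50, vac_scale_factor = 0.2.
 * Compile with: cc demo.c -lm
 */
#include <stdio.h>
#include <math.h>

#define Min(a, b) ((a) < (b) ? (a) : (b))

int
main(void)
{
	double		vac_base_thresh = 50;
	double		vac_scale_factor = 0.2;
	double		sizes[] = {1e6, 1e8, 1e10};	/* reltuples */

	for (int i = 0; i < 3; i++)
	{
		double		reltuples = sizes[i];

		/* current behavior: threshold grows linearly with table size */
		double		current = vac_base_thresh + vac_scale_factor * reltuples;

		/* proposed: cap the linear term with a sqrt-based one */
		double		proposed = Min(current,
								   vac_base_thresh +
								   vac_scale_factor * sqrt(reltuples) * 1000);

		printf("reltuples=%.0e  current=%.0f  proposed=%.0f  (~%.0fx more often)\n",
			   reltuples, current, proposed, current / proposed);
	}
	return 0;
}

With these defaults, a table with 10^8 rows would be vacuumed about 10x more often than today, and one with 10^10 rows about 100x more often, which is in the same spirit as the "twice / four times / eight times" progression quoted above, just with a different growth rate.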