From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Nathan Bossart <nathandbossart(at)gmail(dot)com>
Cc: Melanie Plageman <melanieplageman(at)gmail(dot)com>, Frédéric Yhuel <frederic(dot)yhuel(at)dalibo(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, David Rowley <dgrowleyml(at)gmail(dot)com>
Subject: Re: New GUC autovacuum_max_threshold ?
Date: 2024-04-25 18:33:05
Message-ID: CA+TgmoY4BENJyYcnU2eLFYZ73MZOb72WymNBH7vug6DtA+CZZw@mail.gmail.com
Lists: pgsql-hackers

On Wed, Apr 24, 2024 at 3:57 PM Nathan Bossart <nathandbossart(at)gmail(dot)com> wrote:
> Yeah, I'm having trouble following the proposed mechanics for this new GUC,
> and it's difficult to understand how users would choose a value. If we
> just want to cap the number of tuples required before autovacuum takes
> action, perhaps we could simplify it to something like
>
> vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;
> vacthresh = Min(vacthresh, vac_max_thresh);
>
> This would effectively cause autovacuum_vacuum_scale_factor to be
> overridden for large tables where the scale factor would otherwise cause
> the calculated threshold to be extremely high.

+1 for this. It seems a lot easier to understand than the original
proposal. And in fact, when I was working on my 2024.pgconf.dev
presentation, I suggested exactly this idea on one of my slides.
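
To make the effect of the cap concrete, here is a small standalone C
sketch (not PostgreSQL source). The base threshold of 50 and scale
factor of 0.2 mirror the current defaults; the 100-million cap is an
arbitrary value chosen only for illustration, not a proposed default:

#include <stdio.h>

/*
 * Illustrative sketch only: compare the current threshold formula with
 * the proposed capped version.  base/scale mirror the defaults
 * (autovacuum_vacuum_threshold = 50, autovacuum_vacuum_scale_factor = 0.2);
 * max_thresh is a hypothetical cap, not a proposed default.
 */
static double
capped(double reltuples, double base, double scale, double max_thresh)
{
    double vacthresh = base + scale * reltuples;

    return vacthresh < max_thresh ? vacthresh : max_thresh;
}

int
main(void)
{
    double sizes[] = {1e6, 1e8, 1e10};  /* 1M, 100M, 10B tuples */
    double max_thresh = 100e6;          /* hypothetical cap: 100 million */

    for (int i = 0; i < 3; i++)
        printf("reltuples %.0e: uncapped %.0f, capped %.0f\n",
               sizes[i],
               50 + 0.2 * sizes[i],
               capped(sizes[i], 50, 0.2, max_thresh));
    return 0;
}

For small tables the two formulas agree; only once the scale-factor term
exceeds the cap does the capped version diverge, which is exactly the
"override for large tables" behavior described above.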

I believe that the underlying problem here can be summarized in this
way: just because I'm OK with 2MB of bloat in my 10MB table doesn't
mean that I'm OK with 2TB of bloat in my 10TB table. One reason for
this is simply that I can afford to waste 2MB much more easily than I
can afford to waste 2TB -- and that applies both on disk and in
memory. Another reason, at least in existing releases, is that at some
point index vacuuming hits a wall because we run out of space for dead
tuples. We *most definitely* want to do index vacuuming before we get
to the point where we're going to have to do multiple cycles of index
vacuuming. That latter problem should be fixed in v17 by the recent
dead TID storage changes. But even so, you generally want to contain
bloat before too many pages get added to your tables or indexes,
because you can't easily get rid of them again afterward, so I think
there's still a good case for preventing autovacuum from scaling the
threshold out to infinity.

What does surprise me is that Frédéric suggests a default value of
500,000. If half a million tuples (proposed default) is 20% of your
table (default value of autovacuum_vacuum_scale_factor) then your
table has 2.5 million tuples. Unless those tuples are very wide, that
table isn't even 1GB in size. I'm not aware that there's any problem
at all with the current formula on a table of that size, or even ten
times that size. I think you need to have tables that are hundreds of
gigabytes in size at least before this starts to become a serious
problem. Looking at this from another angle, in existing releases, the
maximum usable amount of autovacuum_work_mem is 1GB, which means we
can store one-sixth of a billion dead TIDs, or roughly 166 million.
And that limit has been a source of occasional complaints for years.
So we have those complaints on the one hand, suggesting that 166
million is not enough, and then we have this proposal, saying that
more than half a million is too much. That's really strange; my
initial hunch is that the value should be 100-500x higher than what
Frédéric proposed.
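
For what it's worth, here is that back-of-the-envelope arithmetic as a
tiny C sketch; the 100-byte tuple width is purely an assumption for
illustration, while the other figures are the defaults and limits
mentioned above (500,000 proposed cap, 0.2 scale factor, ~1GB of usable
autovacuum_work_mem, 6-byte dead TIDs):

#include <stdio.h>

int
main(void)
{
    double proposed_cap = 500000;   /* Frédéric's proposed default */
    double scale_factor = 0.2;      /* autovacuum_vacuum_scale_factor default */
    double tuple_bytes = 100;       /* assumed tuple width, illustration only */
    double work_mem_bytes = 1e9;    /* ~1GB of usable autovacuum_work_mem */
    double tid_bytes = 6;           /* size of one dead TID */

    double reltuples = proposed_cap / scale_factor;     /* 2.5 million */
    double table_mb = reltuples * tuple_bytes / 1e6;    /* ~250 MB */
    double max_dead_tids = work_mem_bytes / tid_bytes;  /* ~166 million */

    printf("cap kicks in at ~%.1f million tuples (~%.0f MB at %g-byte tuples)\n",
           reltuples / 1e6, table_mb, tuple_bytes);
    printf("dead TIDs that fit in ~1GB: ~%.0f million (~%.0fx the proposed cap)\n",
           max_dead_tids / 1e6, max_dead_tids / proposed_cap);
    return 0;
}

With those inputs the cap would start firing on tables of roughly 2.5
million tuples, while the existing dead-TID limit is on the order of
300x larger, which is where the 100-500x hunch comes from.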

I'm also sort of wondering how much the tuple width matters here. I'm
not quite sure.
--
Robert Haas
EDB: http://www.enterprisedb.com