From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: Frédéric Yhuel <frederic(dot)yhuel(at)dalibo(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Nathan Bossart <nathandbossart(at)gmail(dot)com>
Cc: Melanie Plageman <melanieplageman(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, David Rowley <dgrowleyml(at)gmail(dot)com>
Subject: Re: New GUC autovacuum_max_threshold ?
Date: 2024-04-26 08:18:00
Message-ID: d1af9a71e17f117e514d25bba2306ee845a9fde5.camel@cybertec.at
Lists: pgsql-hackers
On Fri, 2024-04-26 at 09:35 +0200, Frédéric Yhuel wrote:
>
> On 26/04/2024 at 04:24, Laurenz Albe wrote:
> > On Thu, 2024-04-25 at 14:33 -0400, Robert Haas wrote:
> > > I believe that the underlying problem here can be summarized in this
> > > way: just because I'm OK with 2MB of bloat in my 10MB table doesn't
> > > mean that I'm OK with 2TB of bloat in my 10TB table. One reason for
> > > this is simply that I can afford to waste 2MB much more easily than I
> > > can afford to waste 2TB -- and that applies both on disk and in
> > > memory.
> >
> > I don't find that convincing. Why are 2TB of wasted space in a 10TB
> > table worse than 2TB of wasted space in 100 tables of 100GB each?
>
> Good point, but another way of summarizing the problem would be that the
> autovacuum_*_scale_factor parameters work well as long as we have a more
> or less evenly distributed access pattern in the table.
>
> Suppose my very large table is only ever updated in its most recent 1%
> of rows. We would probably want to decrease autovacuum_analyze_scale_factor
> and autovacuum_vacuum_scale_factor for this one.
>
> Partitioning would be a good solution, but IMHO postgres should be able
> to handle this case anyway, ideally without per-table configuration.
I agree that you may well want autovacuum and autoanalyze to treat your
large tables differently from your small ones.
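For what it's worth, that kind of per-table tuning is already possible
today with storage parameters; the table name and the values below are
only an illustration, not a recommendation:

    ALTER TABLE my_big_table SET (
        autovacuum_vacuum_scale_factor = 0.001,
        autovacuum_analyze_scale_factor = 0.001
    );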
But I am reluctant to accept even more autovacuum GUCs. It's not as if
we didn't have enough of them already; rather the opposite. You can slap
on more GUCs to handle more special cases, but we will never reach the
goal of a default configuration that makes everybody happy.
I believe that the defaults should work well in moderately sized databases
with moderate usage characteristics. If you have large tables or a high
number of transactions per second, you can be expected to make the effort
and adjust the settings for your case. Adding more GUCs makes life *harder*
for the users who are trying to understand and configure how autovacuum works.
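For reference, the condition these settings feed into is documented as

    vacuum threshold = autovacuum_vacuum_threshold
                       + autovacuum_vacuum_scale_factor * reltuples

and analogously for analyze. With the defaults (threshold 50, scale
factor 0.2), a table with 10 billion rows accumulates 2 billion dead
tuples before autovacuum touches it; the numbers are just a worked
illustration of why this thread is discussing a cap.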
Yours,
Laurenz Albe