From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Jeff Davis <pgsql(at)j-davis(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Decoupling antiwraparound autovacuum from special rules around auto cancellation
Date: 2023-01-18 19:02:05
Message-ID: CA+TgmoYNAHJuejwEG++bWajR+UJsxrF1gr9OPQZoz3uz7rj1_A@mail.gmail.com
Lists: pgsql-hackers
On Wed, Jan 18, 2023 at 1:31 PM Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
> pgstat_report_analyze() will totally override the
> tabentry->dead_tuples information that drives autovacuum.c, based on
> an estimate derived from a random sample -- which seems to me to be an
> approach that just doesn't have any sound theoretical basis.
Yikes. I think we have no choice but to provide some way of correcting
that information, because AFAIK the statistics system is not
crash-safe. But that approach does seem to carry a significant risk of
overwriting correct information with wrong information.
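
Roughly, the mechanism Peter is describing extrapolates a per-block
average from the sampled pages to the whole relation. A simplified
sketch of the idea (the function and numbers below are illustrative
only, not the actual analyze.c code):

/*
 * Toy illustration of extrapolating a dead-tuple count from a random
 * block sample.  Names and structure are assumptions for clarity; the
 * real logic lives in analyze.c and the stats subsystem.
 */
#include <math.h>
#include <stdio.h>

static double
estimate_dead_tuples(double dead_in_sample,  /* dead tuples seen in sampled blocks */
                     double blocks_sampled,  /* number of blocks actually sampled */
                     double total_blocks)    /* relation size in blocks */
{
    if (blocks_sampled <= 0)
        return 0;
    /* scale the per-block average up to the whole relation */
    return floor(dead_in_sample / blocks_sampled * total_blocks);
}

int
main(void)
{
    /* e.g. 1,200 dead tuples seen across 30,000 sampled blocks of a ~1.3M-block table */
    printf("estimated dead tuples: %.0f\n",
           estimate_dead_tuples(1200, 30000, 1310720));
    return 0;
}

Whatever noise is in the sample gets scaled up by the same factor,
which is how one unlucky sample can clobber a previously reasonable
dead-tuples figure.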
> On reflection, maybe you're right here. Maybe it's true that the
> bigger problem is just that the implementation is bad, even on its own
> terms -- since it's pretty bad! Hard to say at this point.
>
> Depends on how you define it, too. Statistical sampling is just not
> fit for purpose here. But is that a problem with
> autovacuum_vacuum_scale_factor? I may have said words that could
> reasonably be interpreted that way, but I'm not prepared to blame it
> on the underlying autovacuum_vacuum_scale_factor model now. It's
> fuzzy.
Yep. I think what we should try to evaluate is which number is
furthest from the truth. My guess is that the threshold is so high
relative to what a reasonable value would be that you can't get any
benefit out of making the dead tuple count more accurate. Like, if the
threshold is 100x too high, or something, then who cares how accurate
the dead tuples number is? It's going to be insufficient to trigger
vacuuming whether it's right or wrong. We should try substituting a
less-bogus threshold calculation and see what happens then. An
alternative theory is that the threshold is fine and we're only
failing to reach it because the dead tuple calculation is so
inaccurate. Maybe that's even true in some scenarios, but I bet that
it's never the issue when people have really big tables. The fact that
I'm OK with 10MB of bloat in my 100MB table doesn't mean I'm OK with
1TB of bloat in my 10TB table. Among other problems, I can't even
vacuum away that much bloat in one index pass, because autovacuum
can't use enough work memory for that. Also, the absolute space
wastage matters.
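
To put rough numbers on the 10TB case (back-of-the-envelope only; the
constants are assumptions: the default 0.2 scale factor and 50-tuple
base threshold, 100-byte rows, and a dead-TID array at 6 bytes per TID
with a 1GB allocation cap):

/*
 * Back-of-the-envelope arithmetic for the 10TB-table example above.
 * All constants are assumptions, not measurements.
 */
#include <math.h>
#include <stdio.h>

int
main(void)
{
    double  table_bytes  = 10.0 * 1024 * 1024 * 1024 * 1024;   /* 10TB */
    double  row_bytes    = 100.0;                    /* assumed average row width */
    double  reltuples    = table_bytes / row_bytes;  /* ~110 billion rows */
    double  scale_factor = 0.2;   /* assumed autovacuum_vacuum_scale_factor default */
    double  base_thresh  = 50.0;  /* assumed autovacuum_vacuum_threshold default */

    /* dead tuples needed before autovacuum triggers at all */
    double  vacthresh = base_thresh + scale_factor * reltuples;

    /* dead TIDs that fit in one index pass: 1GB array, 6 bytes per TID */
    double  tids_per_pass = (1024.0 * 1024.0 * 1024.0) / 6.0;

    printf("trigger threshold: %.0f dead tuples\n", vacthresh);
    printf("dead TIDs per index pass: %.0f\n", tids_per_pass);
    printf("index passes needed at the trigger point: %.0f\n",
           ceil(vacthresh / tids_per_pass));
    return 0;
}

Under those assumptions, by the time the default threshold fires on a
table that size, you're looking at roughly two orders of magnitude more
dead TIDs than a single index pass can absorb.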
--
Robert Haas
EDB: http://www.enterprisedb.com