From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Magnus Hagander <magnus(at)hagander(dot)net>, Patrik Novotny <panovotn(at)redhat(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: RFE: Make statistics robust for unplanned events
Date: 2021-04-22 22:35:41
Message-ID: 1268658.1619130941@sss.pgh.pa.us
Lists: pgsql-hackers
Peter Geoghegan <pg(at)bowt(dot)ie> writes:
> We already *almost* pay the full cost of durably storing the
> information used by autovacuum.c's relation_needs_vacanalyze() to
> determine if a VACUUM is required -- we're only missing
> new_dead_tuples/tabentry->n_dead_tuples. Why not go one tiny baby step
> further to fix this issue?
Definitely worth thinking about, but I'm a little confused about how
you see this working. Those pg_class fields are updated by vacuum
(or analyze) itself. How could they usefully serve as input to
autovacuum's decisions?
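
(For context: the dead-tuple test in relation_needs_vacanalyze() compares the stats collector's n_dead_tuples against a threshold derived from reltuples. A rough Python sketch of that logic — variable names and the default GUC values mirror autovacuum.c, but this is illustrative, not the actual code:)

```python
def needs_vacuum(reltuples, n_dead_tuples,
                 vac_base_thresh=50,      # autovacuum_vacuum_threshold default
                 vac_scale_factor=0.2):   # autovacuum_vacuum_scale_factor default
    # Threshold: base + scale_factor * reltuples (from pg_class, durable)
    vacthresh = vac_base_thresh + vac_scale_factor * reltuples
    # n_dead_tuples comes from the non-durable stats collector; it is
    # lost after a hard crash, which is the robustness gap under discussion
    return n_dead_tuples > vacthresh

# With defaults, a 10000-row table needs more than 2050 dead tuples:
print(needs_vacuum(10000, 2051))  # True
print(needs_vacuum(10000, 2050))  # False
```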
> Admittedly, storing new_dead_tuples durably is not sufficient to allow
> ANALYZE to be launched on schedule when there is a hard crash. It is
> also insufficient to make sure that insert-driven autovacuums get
> launched on schedule.
I'm not that worried about the former case, but the latter seems
like kind of a problem.
regards, tom lane