From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: Greg Stark <stark(at)mit(dot)edu>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Thinking about ANALYZE stats and autovacuum and large non-uniform tables
Date: 2021-10-21 21:42:23
Message-ID: CA+hUKGJFA2Dtzn_Er7R27FM2ijewHZMJ_mmS_fiLhgrWwxF8sA@mail.gmail.com
Lists: pgsql-hackers

On Fri, Oct 22, 2021 at 10:13 AM Greg Stark <stark(at)mit(dot)edu> wrote:
> Obviously this could get complex quickly. Perhaps it should be
> something users could declare. Some kind of "partitioned statistics"
> where you declare a where clause and we generate statistics for the
> table where that where clause is true. Then we could fairly easily
> also count things like n_mod_since_analyze for that where clause too.
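Just to make that concrete, I could imagine it being spelled as an
extension of the existing CREATE STATISTICS command, something like
this (purely imagined syntax; the WHERE clause shown here doesn't
exist today, and the table and column names are made up):

    -- hypothetical: extended statistics restricted by a predicate,
    -- so ANALYZE would sample and count only the matching rows
    CREATE STATISTICS pending_stats (ndistinct, mcv)
        ON state, priority
        FROM job_queue
        WHERE state = 'PENDING';

Then something like n_mod_since_analyze could be tracked per
statistics object instead of per table.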
It's a different thing, but somewhat related and maybe worth
mentioning: in DB2 you can declare a table to be VOLATILE. In that
case, by some unspecified heuristics, it'll prefer index scans over
table scans, and it's intended to give stable performance for
queue-like tables by defending against automatically scheduled stats
being collected at a bad time. It's been a while since I ran busy
queue-like workloads on DB2, but I seem to recall it was more about
the dangers of tables that sometimes have, say, 10 rows and sometimes
42 million, rather than the case of 42 million DONE rows and 0-10
PENDING rows, but not a million miles off.
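
For what it's worth, the DB2 incantation is just something along
these lines (going from memory, so check the docs; the table name is
made up):

    -- DB2 LUW: declare that the table's cardinality varies wildly,
    -- nudging the optimizer toward index access regardless of stats
    ALTER TABLE job_queue VOLATILE CARDINALITY;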