From: Tomas Vondra <tomas(at)vondra(dot)me>
To: Robert Treat <rob(at)xzilla(dot)net>
Cc: Michael Paquier <michael(at)paquier(dot)xyz>, wenhui qiu <qiuwenhuifx(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PoC: history of recent vacuum/checkpoint runs (using new hooks)
Date: 2024-12-29 18:25:05
Message-ID: ed880676-2001-4ec5-b39d-e5b113f3a488@vondra.me
Lists: pgsql-hackers
On 12/29/24 16:39, Robert Treat wrote:
> On Fri, Dec 27, 2024 at 8:25 PM Tomas Vondra <tomas(at)vondra(dot)me> wrote:
>> On 12/27/24 05:00, Michael Paquier wrote:
>>> On Thu, Dec 26, 2024 at 06:58:11PM +0100, Tomas Vondra wrote:
>>>> If 128MB is insufficient, why would 256MB be OK? A factor of 2x does not
>>>> make a fundamental difference ...
>>>>
>>>> Anyway, the 128MB value is rather arbitrary. I don't mind increasing the
>>>> limit, or possibly removing it entirely (and accepting anything the
>>>> system can handle).
>>>
>>> + DefineCustomIntVariable("stats_history.size",
>>> + "Sets the amount of memory available for past events.",
>>>
>>> How about some time-based retention? Data size can be hard to think
>>> about for the end user, while there would be a set of users that would
>>> want to retain data for the past week, month, etc? If both size and
>>> time upper-bound are defined, then entries that match one condition or
>>> the other are removed.
>>>
>>
>> Right. In my response [1] I suggested replacing the simple memory limit
>> with a time-based limit, but I haven't done anything about that yet.
>>
>> And the more I think about it, the more I'm convinced we don't need to
>> keep the data about past runs in memory; a file should be enough (except
>> maybe for a small buffer). That would mean we don't need to worry about
>> dynamic shared memory, etc. I initially rejected this because it seemed
>> like a regression to how pgstat worked initially (sharing data through
>> files), but I don't think that's actually true - this data is different
>> (almost append-only), etc.
>>
>> The one case when we may need to read the data is in response to DROP of
>> a table, when we need to discard entries for that object. Or we could
>> handle that by recording OIDs of dropped objects ... not sure how
>> complex this would need to be.
>>
>> We'd still want to enforce some limits, of course - a time-based limit
>> on the minimum amount of time to cover, and a maximum amount of disk
>> space to use (more as a safety net).
>>
>> FWIW there's one "issue" with enforcing the time-based limit - we can
>> only do that for the "end time", because that's when we see the entry.
>> If you configure the limit to keep "1 day" history, and then a vacuum
>> completes after running for 2 days, we want to keep it, so that anyone
>> can actually see it.
>>
>
> I can't say I recall all the reasoning involved in making
> pg_stat_statements just be based on a fixed number of entries, but the
> ability to come up with corner cases was certainly a factor. For
> example, imagine the scenario where you set a max at 30 days, but you
> have some tables only being vacuumed every few months. Ideally you
> probably want the last entry no matter what, and honestly probably the
> last 2 (in case you are troubleshooting something, having the last run
> and something to compare against is ideal). In any case, it can get
> complicated pretty quickly.
>
I really don't want to get into such complicated retention policies,
because there's no "right" solution, and it adds complexity both to the
implementation (e.g. we'd need to keep per-relation counters for all the
various events) and to processing (how would you know whether there were
no other vacuum runs on a relation, or whether those other events were
removed?).
I think the best solution is to use a trivial retention policy (e.g. keep
everything for X days - see the rough sketch below), and if you want to
keep a longer history, you need to read and archive the data regularly
(the retention period ensures you don't miss any events).
After all, that's what we assume for most other runtime stats anyway -
it's not very useful to know the current values if you can't calculate
deltas. So you have to sample the stats regularly.
There's also the trouble that this is not crash-safe, so I'd hesitate to
rely on it too much if the data can simply disappear.
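To illustrate what I mean by a trivial policy - it could boil down to a
single GUC, roughly like this (the GUC name stats_history.retention_days
and everything else here is just a sketch of the idea, not what the PoC
patch actually does):

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/guc.h"

    PG_MODULE_MAGIC;

    /* hypothetical setting: how many days of history to keep */
    static int  stats_history_retention_days = 7;

    void        _PG_init(void);

    void
    _PG_init(void)
    {
        DefineCustomIntVariable("stats_history.retention_days",
                                "Sets how many days of past events to keep.",
                                NULL,
                                &stats_history_retention_days,
                                7,              /* default: one week */
                                1,              /* keep at least one day */
                                365,            /* arbitrary upper bound */
                                PGC_SIGHUP,     /* changeable on reload */
                                0,
                                NULL, NULL, NULL);

        MarkGUCPrefixReserved("stats_history");
    }

    /*
     * Pruning would then simply discard entries whose end timestamp is
     * older than (now - retention) whenever new entries get written.
     */

Anything fancier than that I'd leave to whatever external tool reads and
archives the history.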
> ...
>
> At the risk of increasing scope, since you already are working on
> checkpoints along with vacuums, I'm curious if there was a reason not
> to do analyze stats retention as well? It seems pretty correlated in
> the same area/problems as vacuum history.
>
I agree tracking ANALYZE would be useful. I skipped it mostly to keep
the scope as limited as possible - two types of events seemed enough for
the PoC. Adding another hook to do_analyze_rel() would be fairly trivial,
but I'll leave that for later.
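It would presumably follow the same pattern as the existing hooks -
roughly something like this (the names and the signature are made up
here, just to show the shape, not what the patch does):

    /* hypothetical hook type, declared in some header */
    typedef void (*analyze_rel_end_hook_type) (Oid relid,
                                               double livetuples,
                                               double deadtuples);
    extern PGDLLIMPORT analyze_rel_end_hook_type analyze_rel_end_hook;

    /* in commands/analyze.c */
    analyze_rel_end_hook_type analyze_rel_end_hook = NULL;

    /* ... and near the end of do_analyze_rel(), once stats are updated */
    if (analyze_rel_end_hook)
        (*analyze_rel_end_hook) (RelationGetRelid(onerel),
                                 totalrows, totaldeadrows);

The extension side would then just record an entry the same way it does
for vacuum runs.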
regards
--
Tomas Vondra