From: Ilia Evdokimov <ilya(dot)evdokimov(at)tantorlabs(dot)com>
To: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
Cc: Greg Sabino Mullane <htamfids(at)gmail(dot)com>, "Andrey M(dot) Borodin" <x4mmm(at)yandex-team(dot)ru>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Michael Paquier <michael(at)paquier(dot)xyz>
Subject: Re: Sample rate added to pg_stat_statements
Date: 2024-12-02 11:06:30
Message-ID: f202ed0f-28c3-45aa-884c-db0a78322342@tantorlabs.com
Lists: pgsql-hackers
On 26.11.2024 01:15, Ilia Evdokimov wrote:
>
>
> On 22.11.2024 09:08, Alexander Korotkov wrote:
>> On Wed, Nov 20, 2024 at 12:07 AM Michael Paquier<michael(at)paquier(dot)xyz> wrote:
>>> On Tue, Nov 19, 2024 at 09:39:21AM -0500, Greg Sabino Mullane wrote:
>>>> Oh, and a +1 in general to the patch, OP, although it would also be nice to
>>>> start finding the bottlenecks that cause such performance issues.
>>> FWIW, I'm not eager to integrate this proposal without looking at this
>>> exact argument in depth.
>>>
>>> One piece of it would be to see how much of such "bottlenecks" we
>>> would be able to get rid of by integrating pg_stat_statements into
>>> the central pgstats with the custom APIs, without pushing the module
>>> into core. This means that we would shrink the existing pgss hash
>>> key to an 8-byte objid rather than the 13 bytes used now, as the
>>> current code relies on (toplevel, userid, queryid) for the entry
>>> lookup (entry removal is sniped with these three values as well, as
>>> are dshash seq scans). The odds of conflicts would still play in our
>>> favor even if we have a few million entries, or even ten times that.
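For illustration, here is a minimal sketch of how such an 8-byte objid
could be derived from the current three-part key (the helper name is
hypothetical; hash_combine64() is from PostgreSQL's common/hashfn.h):

    #include "postgres.h"
    #include "common/hashfn.h"

    /*
     * Hypothetical helper: collapse the (toplevel, userid, queryid)
     * lookup key into a single 64-bit objid for a central pgstats hash.
     * With 64 bits, collisions stay very unlikely even at millions of
     * entries.
     */
    static uint64
    pgss_entry_objid(Oid userid, uint64 queryid, bool toplevel)
    {
        uint64      objid = queryid;

        objid = hash_combine64(objid, (uint64) userid);
        objid = hash_combine64(objid, (uint64) toplevel);
        return objid;
    }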
>> If you run "pgbench -S -M prepared" on a pretty large machine with
>> high concurrency, the spinlock in pgss_store() can become quite a
>> bottleneck. And I'm not sure that switching all the counters to
>> atomics would improve the situation, given how many counters we
>> already have.
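For context, the contention here is the per-entry spinlock that
pgss_store() takes to bump the counters; simplified from the actual
pg_stat_statements code:

    /* every backend running this statement serializes on e->mutex */
    volatile pgssEntry *e = (volatile pgssEntry *) entry;

    SpinLockAcquire(&e->mutex);
    e->counters.calls[kind] += 1;
    e->counters.total_time[kind] += total_time;
    /* ... a dozen more counters updated under the same lock ... */
    SpinLockRelease(&e->mutex);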
>>
>> I'm generally +1 for the approach taken in this thread. But I would
>> suggest introducing a threshold value for query execution time and
>> sampling only the queries below that threshold. A slower query
>> shouldn't be sampled: it can't be very frequent, and it is valuable
>> enough to be counted individually (while very fast queries probably
>> only matter "on average").
>>
>> ------
>> Regards,
>> Alexander Korotkov
>> Supabase
>
> I really liked your idea, and I’d like to propose an enhancement that
> I believe improves it further.
>
> Yes, if a query’s execution time exceeds the threshold, it should
> always be tracked, with no sampling applied. For queries below the
> threshold, however, the sampling logic should drop shorter queries
> more aggressively than those closer to the threshold. In my view, the
> ideal approach is for the shortest queries to have the highest
> probability of being skipped by sampling, while queries close to the
> threshold are almost always tracked.
>
> This behavior can be achieved by tracking a query whenever:
>
> pg_stat_statements.sample_exectime_threshold * random(0, 1) < total_time
>
> Here’s how it works:
>
> * As a query’s execution time approaches zero, the probability of it
> being skipped by sampling approaches one.
> * Conversely, as a query’s execution time approaches the threshold,
> the probability of it being skipped approaches zero.
>
> In other words, the probability that a query is dropped by sampling
> decreases linearly from 1 to 0 as its execution time gets closer to
> the threshold.
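For illustration, a minimal sketch of this decision in C (the GUC and
function names are hypothetical; pg_prng_double() and
pg_global_prng_state are PostgreSQL's PRNG facilities from
common/pg_prng.h):

    #include "postgres.h"
    #include "common/pg_prng.h"

    /* pg_stat_statements.sample_exectime_threshold, in ms (illustrative) */
    static double pgss_sample_exectime_threshold = 100.0;

    /*
     * Decide whether a finished query is tracked.  At or above the
     * threshold it is always tracked; below it, the probability of being
     * tracked grows linearly from 0 (at zero execution time) to 1 (at
     * the threshold), i.e. shorter queries are skipped most often.
     */
    static bool
    pgss_track_query(double total_time_ms)
    {
        if (total_time_ms >= pgss_sample_exectime_threshold)
            return true;

        return pgss_sample_exectime_threshold *
            pg_prng_double(&pg_global_prng_state) < total_time_ms;
    }

Note that the two branches agree at the threshold, so the tracking
probability rises continuously to 1 with no discontinuity.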
>
> I believe this approach offers an ideal user experience. I have
> attached a new patch implementing this logic. Please let me know if
> you have any feedback on the code comments, the naming of variables,
> or the documentation. I’m always open to discussion.
>
> --
> Best regards,
> Ilia Evdokimov,
> Tantor Labs LLC.
>
I’ve been closely reviewing my latest patch (v5-*.patch), which
implements time-based sampling, and I’ve realized that it might not be
the best approach. Let me explain why.
* We can only make the sampling decision before 'pgss_planner()' runs
  (see the rough hook sequence sketched below); at that point, however,
  we don’t yet know the query’s execution time, since it only becomes
  available in 'pgss_ExecutorEnd()' or 'pgss_ProcessUtility()';
* If we instead wait until execution completes and the actual execution
  time is known, we run into a problem: by then, 'pgss_planner()' may
  already have recorded the query’s statistics into shared memory,
  making it too late to decide whether to sample the query;
* Delaying sampling until execution finishes would also require waiting
  for the execution time, which could introduce performance overhead.
  This defeats the purpose of sampling, which is to reduce the cost of
  tracking queries.
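For reference, a rough sketch of the hook sequence involved (simplified;
the hook names are from pg_stat_statements, the annotations are mine):

    /*
     * Simplified order of pg_stat_statements hooks for one query:
     *
     *   post_parse_analyze hook  -- queryId becomes available
     *   pgss_planner()           -- with track_planning, pgss_store()
     *                               may already write statistics here
     *   executor runs the query  -- execution time is being measured
     *   pgss_ExecutorEnd()       -- total_time is finally known and
     *                               pgss_store() is called again
     *
     * A time-based sampling decision is only possible at the last step,
     * after statistics may already have been written.
     */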
I believe we should reconsider and revert to the sampling approach of
v4-*.patch. If I’ve missed anything, or if there’s an alternative way to
implement time-threshold-based sampling efficiently, I’d be grateful to
hear your insights.
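For comparison, here is a minimal sketch of the v4-style rate-based
decision, which can be made up front (the GUC and variable names are
illustrative, not necessarily the patch’s actual identifiers):

    #include "postgres.h"
    #include "common/pg_prng.h"

    /* GUC: pg_stat_statements.sample_rate, fraction of queries to track */
    static double pgss_sample_rate = 1.0;

    /* decided once per query, before pgss_planner() ever runs */
    static bool current_query_sampled;

    static void
    pgss_decide_sampling(void)
    {
        current_query_sampled =
            pg_prng_double(&pg_global_prng_state) < pgss_sample_rate;
    }

Since the decision depends only on the PRNG and a fixed rate, it needs
no execution time and can gate every later pgss_store() call
consistently.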
--
Best regards,
Ilia Evdokimov,
Tantor Labs LLC.