From: Ilia Evdokimov <ilya(dot)evdokimov(at)tantorlabs(dot)com>
To: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Cc: Greg Sabino Mullane <htamfids(at)gmail(dot)com>, "Andrey M(dot) Borodin" <x4mmm(at)yandex-team(dot)ru>, Alexander Korotkov <aekorotkov(at)gmail(dot)com>, Michael Paquier <michael(at)paquier(dot)xyz>
Subject: Re: Sample rate added to pg_stat_statements
Date: 2025-01-09 21:16:17
Message-ID: 1b13d748-5e98-479c-9222-3253a734a038@tantorlabs.com
Lists: pgsql-hackers
On 22.11.2024 09:08, Alexander Korotkov wrote:
> On Wed, Nov 20, 2024 at 12:07 AM Michael Paquier <michael(at)paquier(dot)xyz> wrote:
>> On Tue, Nov 19, 2024 at 09:39:21AM -0500, Greg Sabino Mullane wrote:
>>> Oh, and a +1 in general to the patch, OP, although it would also be nice to
>>> start finding the bottlenecks that cause such performance issues.
>> FWIW, I'm not eager to integrate this proposal without looking at this
>> exact argument in depth.
>>
>> One piece of it would be to see how much of such "bottlenecks" we
>> would be able to get rid of by integrating pg_stat_statements into
>> the central pgstats with the custom APIs, without pushing the module
>> into core. This means that we would combine the existing hash of pgss
>> to shrink to 8 bytes for objid rather than 13 bytes now as the current
>> code relies on (toplevel, userid, queryid) for the entry lookup (entry
>> removal is sniped with these three values as well, or dshash seq
>> scans). The odds of conflicts would still play in our favor even if
>> we have a few million entries, or even ten times that.
> If you run "pgbench -S -M prepared" on a pretty large machine with
> high concurrency, then spin lock in pgss_store() could become pretty
> much of a bottleneck. And I'm not sure switching all counters to
> atomics could somehow improve the situation given we already have
> pretty many counters.
>
> I'm generally +1 for the approach taken in this thread. But I would
> suggest introducing a threshold value for query execution time, and
> sampling just everything below that threshold. Slower queries
> shouldn't be sampled, because they can't be too frequent, and it
> could be more valuable to count them individually (while very fast
> queries probably only matter "on average").
>
> ------
> Regards,
> Alexander Korotkov
> Supabase
>
>
BTW, since we're now performing sampling outside of
pgss_post_parse_analyze(), I'd like to reconsider Alexander's
proposal: sample only those queries whose execution time falls below
a specified threshold, as measured in the hook where execution time
is recorded. Queries exceeding this threshold would bypass sampling
and always be tracked.
I'll certainly address all of Andrey's comments as well.
What are your thoughts on this approach?
--
Best regards,
Ilia Evdokimov,
Tantor Labs LLC.