From: Julien Rouhaud <rjuju123(at)gmail(dot)com>
To: 赵庭海(庭章) <zhaotinghai(dot)zth(at)alibaba-inc(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Re: Limit length of queries in pg_stat_statement extension
Date: 2025-01-19 04:49:48
Message-ID: Z4yEbLzSwboKDr65@jrouhaud
Lists: pgsql-hackers
On Sun, Jan 19, 2025 at 03:32:19AM +0800, 赵庭海(庭章) wrote:
>
> I'm sorry, I left out some details earlier. I found that the backend process
> doing the garbage collection was stuck in the gc_qtexts() loop for a long
> time. The main backtrace is below.
>
> ```
> #0 0x00007fc528d6aba0 in __write_nocancel () from /lib64/libc.so.6
> #1 0x00007fc528cf52f3 in _IO_new_file_write () from /lib64/libc.so.6
> #2 0x00007fc528cf5b90 in __GI__IO_file_xsputn () from /lib64/libc.so.6
> #3 0x00007fc528cea7e2 in fwrite () from /lib64/libc.so.6
> #4 0x00007fc529199dd5 in gc_qtexts () at pg_stat_statements.c:2380
> #5 pgss_store
> #6 0x00007fc52919a2b8 in pgss_post_parse_analyze (query=0x1e9aed8, pstate=0x178a220) at pg_stat_statements.c:900
> ```
>
> So I think the main reason the lock is held for so long is that the I/O
> operations take a long time because of these very long queries.
>
> In my production environment, pg_stat_statements.max is set to 1000. I found
> that when this problem occurred, gc took more than 20 seconds. If I limit the
> length of a single SQL statement to 8 kB, it only takes 1.79 seconds.
As I mentioned earlier, entry eviction that doesn't lead to a query text gc is
also expensive, and happens more often. Truncating the query text makes
pg_stat_statements almost useless for many use cases (e.g. if you need to run
the query again to test a hypothetical index or similar).
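FWIW, a rough way to check from SQL how much query text is being kept around,
and whether a handful of very long statements dominate the external text file
(a minimal sketch, run in a database where the extension is installed):

```
-- total entries and total stored query-text volume
SELECT count(*) AS entries,
       pg_size_pretty(sum(octet_length(query))) AS total_text
FROM pg_stat_statements;

-- the ten longest stored query texts
SELECT queryid, octet_length(query) AS text_bytes
FROM pg_stat_statements
ORDER BY octet_length(query) DESC
LIMIT 10;
```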
In your case, almost 2s of total freeze still sounds like something that
wouldn't be acceptable. Either you have some OLTP system, in which case that's
many orders of magnitude longer than a typical query, or it's some kind of
OLAP system, and then the difference between 2s and 10s is a very narrow
window for it to be a game changer.
Why exactly do you have pg_stat_statements.max set to a value that low? Have
you tried to tune it, and/or to estimate how many distinct entries you would
need to store to avoid overly frequent evictions? In general, if your workload
leads to something that will never fit in a finite number of entries (use of
temporary tables is a perfect example), the overhead is so high, gc or not,
that your only option is to get rid of pg_stat_statements.
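To answer the sizing question, comparing the number of entries currently held
against the configured cap is a cheap starting point (a sketch, assuming the
extension is installed in the database you connect to):

```
-- how many entries are held right now, versus the configured cap
SELECT count(*) AS current_entries,
       current_setting('pg_stat_statements.max') AS configured_max
FROM pg_stat_statements;
```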
>
> > Isn't the pg_stat_statements_info.dealloc counter enough to figure out the root
> > issue?
>
> In my opinion, pg_stat_statements_info.dealloc doesn't reflect how long the
> garbage collection takes. Earlier, when I was checking the logs for the
> abnormal periods, there was only some slow parse logging like below.
Having this counter continuously increasing tells you that you will almost
certainly have lock contention. You should also see pg_stat_statements in the
wait events. And again, the gc just makes things worse; eviction itself will
kill your performance almost as much.
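Something like the following makes both visible (pg_stat_statements_info
requires PostgreSQL 14 or later):

```
-- eviction counter: sample it periodically; the delta between samples
-- is the number of entries evicted in that interval
SELECT dealloc, stats_reset FROM pg_stat_statements_info;

-- sessions currently waiting on the pg_stat_statements LWLock
SELECT pid, wait_event_type, wait_event, state, query
FROM pg_stat_activity
WHERE wait_event_type = 'LWLock'
  AND wait_event = 'pg_stat_statements';
```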