From: Bertrand Drouvot <bertranddrouvot(dot)pg(at)gmail(dot)com>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: [BUG]: the walsender does not update its IO statistics until it exits
Date: 2025-03-03 11:54:39
Message-ID: Z8WYf1jyy4MwOveQ@ip-10-97-1-34.eu-west-3.compute.internal
Lists: pgsql-hackers
Hi,
On Mon, Mar 03, 2025 at 10:51:19AM +0900, Michael Paquier wrote:
> On Fri, Feb 28, 2025 at 10:39:31AM +0000, Bertrand Drouvot wrote:
> > That sounds like a good idea to measure the impact of those extra calls and see
> > if we'd need to mitigate the impacts. I'll collect some data.
So I did some tests using only one walsender (given that the extra lock you
mentioned above is "only" taken for this particular backend).
=== Test with pg_receivewal
I used one pg_receivewal process and ran this workload:
pgbench -n -c8 -j8 -T60 -f <(echo "SELECT pg_logical_emit_message(true, 'test', repeat('0', 1));";)
I did not measure any noticeable extra lag (I measured the time it took for
pg_size_pretty(sent_lsn - write_lsn) from pg_stat_replication to go back to zero).
During the pgbench run, "perf record --call-graph fp -p <walsender_pid>"
(looking at the output of "perf report -n") reports:
1. pgstat_flush_backend() appears at about 3%
2. pg_memory_is_all_zeros() at about 2.8%
3. pgstat_flush_io() at about 0.4%
So it does not look like what we're adding here is a primary bottleneck.
That said, it looks like there is room for improvement in pgstat_flush_backend():
relying on a "have_iostats"-like variable would be better than those
pg_memory_is_all_zeros() calls.
That's done in 0001 attached; with that change, pgstat_flush_backend() now appears
at about 0.2%.
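To illustrate the idea (this is a standalone toy model, not the actual PostgreSQL
code; the struct and function names below are made up for the example), the
difference between a pg_memory_is_all_zeros()-style scan on every flush attempt
and a single flag set whenever a counter is bumped looks roughly like this:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Toy model of a backend's pending IO counters (not the real layout). */
typedef struct PendingBackendIO
{
    uint64_t counts[16];
    uint64_t bytes[16];
} PendingBackendIO;

static PendingBackendIO pending;     /* zero-initialized at startup */
static bool have_iostats = false;    /* set whenever a counter is bumped */

/* Rough equivalent of a pg_memory_is_all_zeros() check: scans the whole
 * struct on every flush attempt, even when nothing happened. */
static bool
pending_is_all_zeros(void)
{
    static const PendingBackendIO zeroes;

    return memcmp(&pending, &zeroes, sizeof(pending)) == 0;
}

/* Called on every IO operation: bump a counter and mark stats as dirty. */
static void
count_io_op(int op, uint64_t nbytes)
{
    pending.counts[op]++;
    pending.bytes[op] += nbytes;
    have_iostats = true;
}

/* Flush path: with the flag, the "nothing to flush" case is one branch
 * instead of a memory scan. */
static bool
flush_backend_io(void)
{
    if (!have_iostats)
        return false;

    /* ... here the real code would copy 'pending' into shared memory
     * under the per-backend stats lock ... */
    memset(&pending, 0, sizeof(pending));
    have_iostats = false;
    return true;
}

int
main(void)
{
    printf("all zeros: %d\n", pending_is_all_zeros());   /* the costlier check */
    printf("flushed:   %d\n", flush_backend_io());       /* 0: nothing pending */
    count_io_op(1, 8192);
    printf("flushed:   %d\n", flush_backend_io());       /* 1: stats were pending */
    return 0;
}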
=== Test with pg_recvlogical
Now, it does not look like pg_receivewal had a lot of IO stats to report (looking
at the pg_stat_get_backend_io() output for the walsender).
Doing the same test with "pg_recvlogical -d postgres -S logical_slot -f /dev/null --start"
generates much more IO stats.
What I observe without the "have_iostats" optimization is:
1. I did not measure any noticeable extra lag
2. pgstat_flush_io() at about 5.5% (pgstat_io_flush_cb() at about 5.3%)
3. pgstat_flush_backend() at about 4.8%
With the "have_iostats" optimization, I now see pgstat_flush_backend() at
about 2.51%.
So, here too, it does not look like what we're adding here is a primary bottleneck,
but it is probably worth implementing the "have_iostats" optimization attached.
Also, while I did not measure any noticeable extra lag, given that
pgstat_flush_io() shows up at about 5.5% and pgstat_flush_backend() at about 2.5%,
it could still make sense to reduce the frequency of the flush calls. Thoughts?
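On the frequency question, one hypothetical option would be to rate-limit the
flushes by time, in the spirit of the PGSTAT_MIN_INTERVAL throttling done in
pgstat_report_stat(). A standalone sketch (the names and the 1-second interval
are made up for illustration, not a proposal for the actual code):

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical minimum interval between walsender stats flushes (ms). */
#define STATS_FLUSH_INTERVAL_MS 1000

static long long
now_ms(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long) ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* Only let the (comparatively expensive) flush run if enough time has
 * passed since the previous one. */
static bool
maybe_flush_stats(void)
{
    static long long last_flush_ms = 0;
    long long now = now_ms();

    if (last_flush_ms != 0 &&
        now - last_flush_ms < STATS_FLUSH_INTERVAL_MS)
        return false;           /* flushed recently enough, skip */

    /* ... this is where pgstat_flush_io() / pgstat_flush_backend()
     * would be called from the walsender loop ... */
    last_flush_ms = now;
    return true;
}

int
main(void)
{
    bool first = maybe_flush_stats();   /* flushes */
    bool second = maybe_flush_stats();  /* throttled: too soon after the first */

    printf("first: %d, second: %d\n", first, second);
    return 0;
}

Whatever is still pending at the end would be picked up by the existing flush
at walsender exit.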
Regards,
--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
Attachments:
  v3-0001-Flush-the-IO-statistics-of-active-walsenders.patch (text/x-diff, 2.9 KB)
  v3-0002-Add-a-new-backend_has_iostats-global-variable.patch (text/x-diff, 2.6 KB)