From: Melanie Plageman <melanieplageman(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
Cc: Robert Treat <rob(at)xzilla(dot)net>, Marcos Pegoraro <marcos(at)f10(dot)com(dot)br>, Alena Rybakina <a(dot)rybakina(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Nazir Bilal Yavuz <byavuz81(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Peter Geoghegan <pg(at)bowt(dot)ie>
Subject: Re: Eagerly scan all-visible pages to amortize aggressive vacuum
Date: 2025-02-03 17:19:32
Message-ID: CAAKRu_beDfQpAraU2CFrtuwWpUF2Uzr-TBqjOr54cVBp=7OSsQ@mail.gmail.com
Lists: pgsql-hackers
On Wed, Jan 29, 2025 at 11:34 AM Melanie Plageman
<melanieplageman(at)gmail(dot)com> wrote:
>
> Next I plan to run the hottail delete benchmark with default settings
> (including FPIs) with master and with the patch for about 24 hours
> each. I'm hoping the long duration will smooth out some of the run
> variance even with FPIs.
I've done this as well as a few other benchmarks of varying durations.
Hot Tail Delete before Aggressive Vacuum:
I ran a 20-hour version of the hot tail benchmark that deletes all
data before it is aggressively vacuumed. Performance is a bit better
with the patch applied. This differed from the runs of the hot tail
delete benchmark that I did with FPIs disabled: for this benchmark (20
hours with FPIs enabled), I saw a decrease in vacuum I/O time with the
patch applied, and a similar change in client backend bulkread
evictions (the DELETE does bulkreads) and client backend writes. These
differences could be run variance due to checkpoint timing, or even
benchmark run order interacting with SSD sustained write performance
issues.
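
(In case it's useful to anyone reproducing this: the client backend
bulkread evictions and writes I'm referring to are the pg_stat_io
counters. A query along these lines surfaces them -- this assumes PG
16+ for the view, and track_io_timing = on for the *_time columns:

-- Per-backend-type I/O counters for ordinary relations.
SELECT backend_type, context,
       reads, read_time,    -- cumulative op counts / milliseconds
       writes, write_time,
       evictions
FROM pg_stat_io
WHERE backend_type IN ('client backend', 'autovacuum worker')
  AND object = 'relation'
ORDER BY backend_type, context;

The 'bulkread' context rows for 'client backend' are where the
DELETE's ring-buffer reads and evictions show up.)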
Overall, I think this long-running hot tail delete benchmark shows
that the patch will not have surprising negative behavior for this
type of workload.
TPCB-like Gaussian Update Distribution:
I also ran a 22-hour version of the built-in tpcb-like benchmark with
a gaussian update distribution. I did see a large decrease in I/O done
by vacuum. However, the gaussian tpcb-like benchmark spends much less
time in vacuum overall, so the results, while positive, are not
dramatically different when considering overall performance of the
workload. The vacuum I/O decrease seems to come mainly from reads of
pgbench_history -- which were much higher on master, likely due to
aggressive vacuums reading pages that had already been evicted from
shared buffers and, potentially, from the kernel buffer cache.
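
(For anyone wanting to reproduce the workload: by this I mean the
built-in tpcb-like transaction with the account lookup switched to a
gaussian distribution. A custom pgbench script along these lines is a
reasonable approximation -- the 5.0 control parameter is just an
example value, not necessarily what I used:

\set aid random_gaussian(1, 100000 * :scale, 5.0)
\set bid random(1, 1 * :scale)
\set tid random(1, 10 * :scale)
\set delta random(-5000, 5000)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)
  VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;

Run it with something like "pgbench -f gaussian_tpcb.sql -T 79200"
against an initialized pgbench database; the file name is arbitrary
and 79200 seconds is 22 hours.)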
I've attached a chart from the 22-hour gaussian tpcb-like benchmark
showing the read and write time increases that correspond with
aggressive vacuums of pgbench_history.
I was re-running some of the shorter benchmarks as a quick soundness
check on this version of the patch and have attached a chart from the
append-only benchmark.
Similar to the gaussian tpcb-like workload, you can see that when the
aggressive vacuum triggers on master, the vacuum read and write times
jump. This is pretty consistent with what I see across all of the
benchmarks.
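
(As a reminder for anyone following along: a vacuum of a table becomes
aggressive once the table's relfrozenxid age exceeds
vacuum_freeze_table_age, so a quick way to see how close each pgbench
table is to its next aggressive vacuum is something like:

SELECT c.relname,
       age(c.relfrozenxid) AS xid_age,
       current_setting('vacuum_freeze_table_age')::int AS freeze_table_age
FROM pg_class c
WHERE c.relkind = 'r'
  AND c.relname LIKE 'pgbench%'
ORDER BY xid_age DESC;
)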
In these two examples, the overall time spent in vacuum I/O isn't very
high, so the performance improvement may not be noticeable. However,
it seems like the patch has an overall smoothing effect on vacuum's
performance.
- Melanie
Attachment | Content-Type | Size
---|---|---
gaussian_tpcb_22hr.png | image/png | 329.3 KB
append-only.png | image/png | 273.8 KB