From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Bharath Rupireddy <bharath(dot)rupireddyforpostgres(at)gmail(dot)com>
Cc: Nathan Bossart <nathandbossart(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: WAL Insertion Lock Improvements
Date: 2023-05-10 23:31:06
Message-ID: ZFwpOqunSz6wHMR7@paquier.xyz
Lists: pgsql-hackers
On Wed, May 10, 2023 at 10:40:20PM +0530, Bharath Rupireddy wrote:
> test-case 2: -T900, WAL ~256 bytes - ran for about 3.5 hours; a more
> than 3X improvement in TPS is seen - 3.11X @ 512, 3.79X @ 768, 3.47X
> @ 1024, 2.27X @ 2048, 2.77X @ 4096 clients
>
> [...]
>
> test-case 2: -t1000000, WAL ~256 bytes - ran for more than 12 hours
> and the maximum improvement is 1.84X @ 1024 clients.
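
(For the archives: the two runs above should boil down to something
like the pgbench invocations below.  The custom script, the
pg_logical_emit_message() payload and the client count are an
illustration on my part, not copied from the actual setup used here.)

    # wal-256.sql: each transaction emits one ~256-byte logical message
    cat > wal-256.sql <<'EOF'
    SELECT pg_logical_emit_message(true, 'bench', repeat('a', 256));
    EOF

    # time-bound run: -T900 is 900 seconds of wall-clock time
    pgbench -n -c 1024 -j 1024 -T 900 -f wal-256.sql postgres

    # count-bound run: -t1000000 is one million transactions per client
    pgbench -n -c 1024 -j 1024 -t 1000000 -f wal-256.sql postgres
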
Thanks. So that's pretty close to what I was seeing for this message
size, where the effect becomes much more visible at 512 clients and
above. All of these tests have been run with fsync = on, I assume. I
think that disabling fsync, or just mounting pg_wal on a tmpfs, should
show the same pattern for larger record sizes (past a 1kB message size
the curve begins to go down at 512 clients and more).
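
To be clear, here is roughly what I have in mind for taking the WAL
flush cost out of the equation; the paths, the tmpfs size and $PGDATA
below are placeholders, so adapt them to your setup:

    # Option 1: disable fsync for the benchmark only (never in
    # production).  fsync is a sighup parameter, so a reload is enough.
    psql -c "ALTER SYSTEM SET fsync = off;"
    psql -c "SELECT pg_reload_conf();"

    # Option 2: keep fsync = on, but back pg_wal with a tmpfs so that
    # WAL flushes become close to free.
    pg_ctl stop -D "$PGDATA"
    mv "$PGDATA"/pg_wal "$PGDATA"/pg_wal.save
    mkdir "$PGDATA"/pg_wal
    sudo mount -t tmpfs -o size=8g tmpfs "$PGDATA"/pg_wal
    cp -a "$PGDATA"/pg_wal.save/. "$PGDATA"/pg_wal/
    pg_ctl start -D "$PGDATA"
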
--
Michael