From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-hackers(at)postgresql(dot)org
Subject: AIO v2.0
Date: 2024-09-01 06:27:50
Message-ID: uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah@brqs62irg4dt
Lists: pgsql-hackers

Hi,

It's been quite a while since the last version of the AIO patchset that I have
posted. Of course parts of the larger project have since gone upstream [1].

A lot of the time since the last version was spent understanding the
performance characteristics of using AIO with WAL, as well as some other odd
performance behaviors that initially made no sense to me. I think I mostly
understand them now, and what the design implications for an AIO subsystem are.

The prototype I had been working on unfortunately suffered from a few design
issues that weren't trivial to fix.

The biggest was that each backend could essentially have hard references to
unbounded numbers of "AIO handles" and that these references prevented these
handles from being reused. Because "AIO handles" have to live in shared memory
(so other backends can wait on them, IO workers can perform them, etc.),
that's obviously an issue. There was always a way to just run out of AIO
handles. I went through quite a few iterations of a design for how to resolve
that - I think I finally got there.

Another significant issue was that when I wrote the AIO prototype,
bufmgr.c/smgr.c/md.c only issued IOs in BLCKSZ increments, with the AIO
subsystem merging them into larger IOs. Thomas et al's work around streaming
read make bufmgr.c issue larger IOs - which is good for performance. But it
was surprisingly hard to fit into my older design.

It took me much longer than I had hoped to address these issues in the
prototype. In the end I made progress by rewriting the patchset from scratch
(well, with a bit of copy & paste).

The main reason I had previously implemented WAL AIO etc was to know the
design implications - but now that they're somewhat understood, I'm planning
to keep the patchset much smaller, with the goal of making it upstreamable.

While making v2 somewhat presentable I unfortunately found a few more design
issues - they're now mostly resolved, I think. But I only resolved the last
one a few hours ago; who knows what a few nights of sleeping on it will
bring. Unfortunately that prevented me from doing some of the polishing that I
had wanted to finish...

Because of a recent move [2], I currently do not have access to my
workstation. I just have access to my laptop - which has enough thermal issues
to make benchmarks not particularly reliable.

So here are just a few teaser numbers, on a PCIe v4 NVMe SSD. Note however
that this is with the BAS_BULKREAD ring size increased - with the default
256kB ring we can only keep one IO in flight at a time (because
io_combine_limit builds larger IOs). We'll need to do something better than
that, but that's yet another separate discussion.

Workload: pg_prewarm('pgbench_accounts') of a scale 5k database, which is
bigger than memory:

                           time (s)
master:                      59.097
aio v2.0, worker:            11.211
aio v2.0, uring*:            19.991
aio v2.0, direct, worker:    09.617
aio v2.0, direct, uring*:    09.802

Workload: SELECT sum(abalance) FROM pgbench_accounts;

                           0 workers  1 worker  2 workers  4 workers
master:                       65.753    33.246     21.095     12.918
aio v2.0, worker:             21.519    12.636     10.450     10.004
aio v2.0, uring*:             31.446    17.745     12.889     10.395
aio v2.0, uring**:            23.497    13.824     10.881     10.589
aio v2.0, direct, worker:     22.377    11.989     09.915     09.772
aio v2.0, direct, uring*:     24.502    12.603     10.058     09.759

*  the reason io_uring is slower here is that workers effectively parallelize
   the memcpys, at the cost of increased CPU usage
** a simple heuristic to use IOSQE_ASYNC to force some parallelism of memcpys

Workload: checkpointing ~20GB of dirty data, mostly sequential:

                           time (s)
master:                      10.209
aio v2.0, worker:            05.391
aio v2.0, uring:             04.593
aio v2.0, direct, worker:    07.745
aio v2.0, direct, uring:     03.351

To solve the issue with an unbounded number of AIO handle references, there
are a few changes compared to the prior approach:

1) Only one AIO handle can be "handed out" to a backend without being
defined. Previously the process of getting an AIO handle wasn't super
lightweight, which made it appealing to cache AIO handles - and that caching
was one part of the problem of running out of AIO handles.

2) Nothing in a backend can force a "defined" AIO handle (i.e. one that
describes a valid operation) to stay around; it's always possible to execute
the AIO operation and then reuse the handle. This provides a forward
progress guarantee, by ensuring that completing AIOs frees up handles
(previously they couldn't be reused until the backend-local reference was
released).

3) Callbacks on AIOs are not allowed to error out anymore, unless it's ok to
take the server down.

4) Obviously some code needs to know the result of an AIO operation and be
able to error out. To allow for that, the issuer of an AIO can provide a
pointer to backend-local memory that'll receive the result of the AIO,
including details about what kind of error occurred (possible errors are
e.g. a read failing or a buffer's checksum validation failing).

In the next few days I'll add a bunch more documentation and comments as well
as some better perf numbers (assuming my workstation survived...).

Besides that, I am planning to introduce "io_method=sync", which will just
execute IO synchronously. In addition to being a good capability to have on
its own, it'll also make it more sensible to split off worker mode support
into its own commit(s).
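For illustration, the core of a synchronous io_method can be very small -
something like the following hypothetical sketch (the enum and function
names are mine, not the patchset's): at submission time the IO is simply
performed inline with pread(), so "completion" happens immediately.

```c
#include <fcntl.h>
#include <unistd.h>

/* Illustrative io_method values; not the patchset's actual definitions. */
typedef enum { IOMETHOD_SYNC, IOMETHOD_WORKER, IOMETHOD_IO_URING } IoMethod;

static ssize_t
submit_read(IoMethod method, int fd, void *buf, size_t len, off_t off)
{
    switch (method)
    {
        case IOMETHOD_SYNC:
            /* execute the IO synchronously, right at submission */
            return pread(fd, buf, len, off);
        case IOMETHOD_WORKER:
        case IOMETHOD_IO_URING:
            /* would enqueue to an IO worker / io_uring SQE instead */
            return -1;  /* not implemented in this sketch */
    }
    return -1;
}
```

Having the sync path go through the same submission interface as the
asynchronous methods is what makes it useful as a baseline and as a way to
separate worker-mode support into its own commits.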

Greetings,

Andres Freund

[1] bulk relation extension, streaming read
[2] personal health challenges, family health challenges and now moving from
the US West Coast to the East Coast, ...

Attachment Content-Type Size
v2.0-0001-bufmgr-Return-early-in-ScheduleBufferTagForWrit.patch text/x-diff 1.0 KB
v2.0-0002-Allow-lwlocks-to-be-unowned.patch text/x-diff 4.5 KB
v2.0-0003-Use-aux-process-resource-owner-in-walsender.patch text/x-diff 4.7 KB
v2.0-0004-Ensure-a-resowner-exists-for-all-paths-that-may.patch text/x-diff 2.3 KB
v2.0-0005-bufmgr-smgr-Don-t-cross-segment-boundaries-in-S.patch text/x-diff 6.2 KB
v2.0-0006-aio-Add-liburing-dependency.patch text/x-diff 9.9 KB
v2.0-0007-aio-Basic-subsystem-initialization.patch text/x-diff 11.6 KB
v2.0-0008-aio-Skeleton-IO-worker-infrastructure.patch text/x-diff 21.3 KB
v2.0-0009-aio-Basic-AIO-implementation.patch text/x-diff 89.4 KB
v2.0-0010-aio-Implement-smgr-md.c-aio-methods.patch text/x-diff 22.6 KB
v2.0-0011-bufmgr-Implement-AIO-support.patch text/x-diff 20.9 KB
v2.0-0012-bufmgr-Use-aio-for-StartReadBuffers.patch text/x-diff 14.0 KB
v2.0-0013-aio-Very-WIP-read_stream.c-adjustments-for-real.patch text/x-diff 4.5 KB
v2.0-0014-aio-Add-IO-queue-helper.patch text/x-diff 7.2 KB
v2.0-0015-bufmgr-use-AIO-in-checkpointer-bgwriter.patch text/x-diff 31.2 KB
v2.0-0016-very-wip-test_aio-module.patch text/x-diff 37.3 KB
v2.0-0017-Temporary-Increase-BAS_BULKREAD-size.patch text/x-diff 1.3 KB
