From: Andres Freund <andres(at)anarazel(dot)de>
To: Noah Misch <noah(at)leadboat(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Robert Haas <robertmhaas(at)gmail(dot)com>, Jakub Wartak <jakub(dot)wartak(at)enterprisedb(dot)com>
Subject: Re: AIO v2.5
Date: 2025-03-11 23:55:35
Message-ID: 5dzyoduxlvfg55oqtjyjehez5uoq6hnwgzor4kkybkfdgkj7ag@rbi4gsmzaczk
Lists: pgsql-hackers
Hi,
On 2025-03-11 12:41:08 -0700, Noah Misch wrote:
> On Mon, Sep 16, 2024 at 01:51:42PM -0400, Andres Freund wrote:
> > On 2024-09-16 07:43:49 -0700, Noah Misch wrote:
> > > For non-sync IO methods, I gather it's essential that a process other than the
> > > IO definer be scanning for incomplete IOs and completing them.
>
> > > Otherwise, deadlocks like this would happen:
> >
> > > backend1 locks blk1 for non-IO reasons
> > > backend2 locks blk2, starts AIO write
> > > backend1 waits for lock on blk2 for non-IO reasons
> > > backend2 waits for lock on blk1 for non-IO reasons
> > >
> > > If that's right, in worker mode, the IO worker resolves that deadlock. What
> > > resolves it under io_uring? Another process that happens to do
> > > pgaio_io_ref_wait() would dislodge things, but I didn't locate the code to
> > > make that happen systematically.
> >
> > Yea, it's code that I haven't forward ported yet. I think basically
> > LockBuffer[ForCleanup] ought to call pgaio_io_ref_wait() when it can't
> > immediately acquire the lock and if the buffer has IO going on.
>
> I'm not finding that code in v2.6. What function has it?
My local version now has it... Sorry, I was focusing on the earlier patches
until now.
What do we want to do for ConditionalLockBufferForCleanup() (I don't think
IsBufferCleanupOK() can matter)? I suspect we should also make it wait for
the IO. See below:
Not for 18, but for full write support, we'll also need logic to wait for IO
in LockBuffer(BUFFER_LOCK_EXCLUSIVE) and answer the same question as for
ConditionalLockBufferForCleanup() for ConditionalLockBuffer().
It's not an issue with the current level of write support in the stack of
patches. But with v1 AIO, which had support for a lot more ways of doing
asynchronous writes, it turned out that not handling it in
ConditionalLockBuffer() triggers an endless loop. This can be
kind-of-reproduced today by just making ConditionalLockBuffer() always return
false, which triggers a hang in the regression tests:
spginsert() loops around spgdoinsert() until it succeeds. spgdoinsert() locks
the child page with ConditionalLockBuffer() and gives up if it can't.
That seems like rather bad code in spgist: even without AIO, it'll
busy-loop until the buffer is unlocked, which could take a while, given that
it conflicts even with a share locker and thus with synchronous writes.
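For reference, the retry loop in spginsert() is roughly this (paraphrased
from memory, not an exact quote of the spgist code):

    /* spginsert(): retry until spgdoinsert() reports success */
    while (!spgdoinsert(index, &spgstate, heap_tid, values, isnull))
    {
        /*
         * spgdoinsert() used ConditionalLockBuffer() on the child page and
         * gave up; we immediately try again, i.e. busy-loop.
         */
    }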
Even if we fixed spgist, it seems rather likely that there's other code that
wouldn't tolerate "spurious" failures, which leads me to think that causing
the IO to complete is probably the safest bet. Triggering IO completion never
requires acquiring new locks that could participate in a deadlock, so it'd be
safe.
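Concretely, I'm thinking of something along these lines near the top of
ConditionalLockBufferForCleanup() (a sketch only; the io_ref field name is
illustrative, following the pgaio_io_ref_wait() terminology used earlier in
this thread):

    /* If the buffer has AIO in flight, drive it to completion first. */
    if (pg_atomic_read_u32(&buf_hdr->state) & BM_IO_IN_PROGRESS)
        pgaio_io_ref_wait(&buf_hdr->io_ref);
    /* then proceed with the existing pin-count / conditional-lock checks */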
> > At this point I am not aware of anything significant left to do in the main
> > AIO commit, safe some of the questions below.
>
> That is a big milestone.
Indeed!
> > - We could reduce memory usage a tiny bit if we made the mapping between
> > pgproc and per-backend-aio-state more complicated, i.e. not just indexed by
> > ProcNumber. Right now IO workers have the per-backend AIO state, but don't
> > actually need it. I'm mildly inclined to think that the complexity isn't
> > worth it, but on the fence.
>
> The max memory savings, for 32 IO workers, is like the difference between
> max_connections=500 and max_connections=532, right?
Even less than that: Aux processes aren't always used as a multiplier in
places where max_connections etc are. E.g. max_locks_per_transaction is just
multiplied by MaxBackends, not MaxBackends+NUM_AUXILIARY_PROCS.
> If that's right, I wouldn't bother in the foreseeable future.
Cool.
> > - Three of the commits in the series really are just precursor commits to
> > their subsequent commits, which I found helpful for development and review,
> > namely:
> >
> > - aio: Basic subsystem initialization
> > - aio: Skeleton IO worker infrastructure
> > - aio: Add liburing dependency
> >
> > Not sure if it's worth keeping these separate or whether they should just be
> > merged with their "real commit".
>
> The split aided my review. It's trivial to turn an unmerged stack of commits
> into the merged equivalent, but unmerging is hard.
That's been the feedback so far, so I'll leave it split.
> > - Right now this series defines PGAIO_VERBOSE to 1. That's good for debugging,
> > but all the ereport()s add a noticeable amount of overhead at high IO
> > throughput (at multiple gigabytes/second), so that's probably not right
> > forever. I'd leave this on initially and then change it to default to off
> > later. I think that's ok?
>
> Sure. Perhaps make it depend on USE_ASSERT_CHECKING later?
Yea, that makes sense.
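I.e. eventually something like:

    #ifdef USE_ASSERT_CHECKING
    #define PGAIO_VERBOSE 1
    #else
    #define PGAIO_VERBOSE 0
    #endif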
> > - To allow io_workers to be PGC_SIGHUP, and to eventually allow to
> > automatically in/decrease active workers, the max number of workers (32) is
> > always allocated. That means we use more semaphores than before. I think
> > that's ok, it's not 1995 anymore. Alternatively we can add a
> > "io_workers_max" GUC and probe for it in initdb.
>
> Let's start as you have it. If someone wants to make things perfect for
> non-root BSD users, they can add the GUC later. io_method=sync is a
> sufficient backup plan indefinitely.
Cool.
I think we'll really need to do something about this for BSD users regardless
of AIO. Or maybe those OSs should fix something, but somehow I don't have
high hopes for an OS that claims to have POSIX-conforming unnamed semaphores
on the basis of a syscall that always returns EPERM... [1].
> > - pg_stat_aios currently has the IO Handle flags as dedicated columns. Not
> > sure that's great?
> >
> > They could be an enum array or such too? That'd perhaps be a bit more
> > extensible? OTOH, we don't currently use enums in the catalogs and arrays
> > are somewhat annoying to conjure up from C.
>
> An enum array does seem elegant and extensible, but it has the problems you
> say. (I would expect to lose time setting up pg_enum.oid values to not change
> between releases.) A possible compromise would be a text array like
> heap_tuple_infomask_flags() does. Overall, I'm not seeing a clear need to
> change away from the bool columns.
Yea, I think that's where I ended up too. If we get a dozen flags we can
reconsider.
> > Todo:
>
> > - Figure out how to deduplicate support for LockBufferForCleanup() in
> > TerminateBufferIO().
>
> Yes, I agree there's an opportunity for a WakePinCountWaiter() or similar
> subroutine.
Done.
> > - Check if documentation for track_io_timing needs to be adjusted, after the
> > bufmgr.c changes we only track waiting for an IO.
>
> Yes.
The relevant sentences seem to be:
- "Enables timing of database I/O calls."
s/calls/waits/
- "Time spent in {read,write,writeback,extend,fsync} operations"
s/in/waiting for/
Even though not all of these will use AIO, the "waiting for" formulation
seems just as accurate.
- "Columns tracking I/O time will only be non-zero when <xref
linkend="guc-track-io-timing"/> is enabled."
s/time/wait time/
> On Mon, Mar 10, 2025 at 02:23:12PM -0400, Andres Freund wrote:
> > Attached is v2.6 of the AIO patchset.
>
> > - 0005, 0006 - io_uring support - close, but we need to do something about
> > set_max_fds(), which errors out spuriously in some cases
>
> What do we know about those cases? I don't see a set_max_fds(); is that
> set_max_safe_fds(), or something else?
Sorry, yes, set_max_safe_fds(). The problem basically is that with io_uring we
will have a large number of FDs already allocated by the time
set_max_safe_fds() is called. set_max_safe_fds() subtracts already_open from
max_files_per_process, allowing few, or even a negative number of, FDs for IO.
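To illustrate (paraphrasing fd.c rather than quoting it), set_max_safe_fds()
ends up computing roughly

    max_safe_fds = Min(usable_fds, max_files_per_process - already_open);

and errors out if the result falls below its minimum. With an io_uring
instance per backend slot created before that point, already_open can
approach or even exceed max_files_per_process (default 1000).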
I think we should redefine max_files_per_process to be about the number of
files each *backend* will additionally open. Jelte was working on related
patches, see [2]
> > + * AIO handles need be registered in critical sections and therefore
> > + * cannot use the normal ResoureElem mechanism.
>
> s/ResoureElem/ResourceElem/
Oops, fixed.
> > + <varlistentry id="guc-io-method" xreflabel="io_method">
> > + <term><varname>io_method</varname> (<type>enum</type>)
> > + <indexterm>
> > + <primary><varname>io_method</varname> configuration parameter</primary>
> > + </indexterm>
> > + </term>
> > + <listitem>
> > + <para>
> > + Selects the method for executing asynchronous I/O.
> > + Possible values are:
> > + <itemizedlist>
> > + <listitem>
> > + <para>
> > + <literal>sync</literal> (execute asynchronous I/O synchronously)
>
> The part in parentheses reads like a contradiction to me.
There's something to that...
> How about phrasing it like one of these:
>
> (execute I/O synchronously, even I/O eligible for asynchronous execution)
> (execute asynchronous-eligible I/O synchronously)
> (execute I/O synchronously, even when asynchronous execution was feasible)
I like the second one best, adopted.
> [..]
> End sentence with question mark, probably.
> [..]
> s/strict/strictly/
> [..]
> I recommend adding "Always called in a critical section." since at least
> pgaio_worker_submit() subtly needs it.
> [..]
> s/that that/that/
> [..]
> s/smgr.,/smgr.c,/ or just "smgr"
> [..]
> s/locallbacks/local callbacks/
> [..]
> s/the the/the/
All adopted.
> > +PgAioHandle *
> > +pgaio_io_acquire_nb(struct ResourceOwnerData *resowner, PgAioReturn *ret)
> > +{
> > + if (pgaio_my_backend->num_staged_ios >= PGAIO_SUBMIT_BATCH_SIZE)
> > + {
> > + Assert(pgaio_my_backend->num_staged_ios == PGAIO_SUBMIT_BATCH_SIZE);
> > + pgaio_submit_staged();
>
> I'm seeing the "num_staged_ios >= PGAIO_SUBMIT_BATCH_SIZE" case uncovered in a
> check-world coverage report. I tried PGAIO_SUBMIT_BATCH_SIZE=2,
> io_max_concurrency=1, and io_max_concurrency=64. Do you already have a recipe
> for reaching this case?
With the default server settings it's hard to hit due to read_stream.c
limiting how much IO it issues:
1) The default io_combine_limit=16 makes reads larger, reducing the queue
depth, at least for sequential scans
2) The default shared_buffers/max_connections settings limit the number of
buffers that can be pinned to 86, which will only allow a small number of
IOs due to 86/io_combine_limit = ~5
3) The default effective_io_concurrency only allows one IO in flight
Melanie has a patch to adjust effective_io_concurrency:
https://www.postgresql.org/message-id/CAAKRu_Z4ekRbfTacYYVrvu9xRqS6G4DMbZSbN_1usaVtj%2Bbv2w%40mail.gmail.com
If I increase shared_buffers and decrease io_combine_limit and put an
elog(PANIC) in that branch, it's rather quickly hit.
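E.g. something like this (illustrative values, not the exact settings I
used):

    shared_buffers = '4GB'     # allows many more pinned buffers
    io_combine_limit = 1       # no combining -> many more individual IOs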
> > +/*
> > + * Stage IO for execution and, if necessary, submit it immediately.
> > + *
> > + * Should only be called from pgaio_io_prep_*().
> > + */
> > +void
> > +pgaio_io_stage(PgAioHandle *ioh, PgAioOp op)
> > +{
>
> We've got closely-associated verbs "prepare", "prep", and "stage". README.md
> doesn't mention "stage". Can one of the following two changes happen?
>
> - README.md starts mentioning "stage" and how it differs from the others
> - Code stops using "stage"
I'll try to add something to README.md. To me the sequence is prepare->stage.
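Roughly, using the function names from the current patches:

    pgaio_io_acquire_nb()   - get a handle
    pgaio_io_prep_*()       - "prepare": associate the target, operation and
                              callbacks with the handle
    pgaio_io_stage()        - "stage": mark it ready for submission (called
                              from pgaio_io_prep_*())
    pgaio_submit_staged()   - submit staged IOs to the IO method, either
                              immediately or at the end of a batch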
> > + * Batch submission mode needs to explicitly ended with
> > + * pgaio_exit_batchmode(), but it is allowed to throw errors, in which case
> > + * error recovery will end the batch.
>
> This sentence needs some grammar help, I think.
Indeed.
> Maybe use:
>
> * End batch submission mode with pgaio_exit_batchmode(). (Throwing errors is
> * allowed; error recovery will end the batch.)
I like it.
> > Size
> > AioShmemSize(void)
> > {
> > Size sz = 0;
> >
> > + /*
> > + * We prefer to report this value's source as PGC_S_DYNAMIC_DEFAULT.
> > + * However, if the DBA explicitly set wal_buffers = -1 in the config file,
>
> s/wal_buffers/io_max_concurrency/
Ooops.
> > +extern int io_workers;
>
> By the rule that GUC vars are PGDLLIMPORT, this should be PGDLLIMPORT.
Indeed. I wish we had something finding violations of this automatically...
> > +static void
> > +maybe_adjust_io_workers(void)
>
> This also restarts workers that exit, so perhaps name it
> start_io_workers_if_missing().
But it also stops IO workers if necessary?
> > +{
> ...
> > + /* Try to launch one. */
> > + child = StartChildProcess(B_IO_WORKER);
> > + if (child != NULL)
> > + {
> > + io_worker_children[id] = child;
> > + ++io_worker_count;
> > + }
> > + else
> > + break; /* XXX try again soon? */
>
> Can LaunchMissingBackgroundProcesses() become the sole caller of this
> function, replacing the current mix of callers? That would be more conducive
> to promptly doing the right thing after launch failure.
I'm not sure that'd be a good idea - right now IO workers are started before
the startup process, as the startup process might need to perform IO. If we
started them only later, in ServerLoop(), we'd potentially do a fair bit of
work, including starting checkpointer, bgwriter and bgworkers, before starting
IO workers. That shouldn't actively break anything, but it would likely make
things slower.
I rather dislike the code around when we start what. Leaving AIO aside, during
a normal startup we start checkpointer and bgwriter before the startup
process, but during a crash restart we don't explicitly start them. Why make
things uniform when it could also be exciting :)
> > --- a/src/backend/utils/init/miscinit.c
> > +++ b/src/backend/utils/init/miscinit.c
> > @@ -293,6 +293,9 @@ GetBackendTypeDesc(BackendType backendType)
> > case B_CHECKPOINTER:
> > backendDesc = gettext_noop("checkpointer");
> > break;
> > + case B_IO_WORKER:
> > + backendDesc = "io worker";
>
> Wrap in gettext_noop() like B_CHECKPOINTER does.
>
> > + Only has an effect if <xref linkend="guc-max-wal-senders"/> is set to
> > + <literal>worker</literal>.
>
> s/guc-max-wal-senders/guc-io-method/
>
> > + * of IOs, wakeups "fan out"; each woken IO worker can wake two more. qXXX
>
> s/qXXX/XXX/
All fixed.
> > + /*
> > + * It's very unlikely, but possible, that reopen fails. E.g. due
> > + * to memory allocations failing or file permissions changing or
> > + * such. In that case we need to fail the IO.
> > + *
> > + * There's not really a good errno we can report here.
> > + */
> > + error_errno = ENOENT;
>
> Agreed there's not a good errno, but let's use a fake errno that we're mighty
> unlikely to confuse with an actual case of libc returning that errno. Like
> one of EBADF or EOWNERDEAD.
Can we rely on those being present on all platforms, including Windows?
> > + for (int contextno = 0; contextno < TotalProcs; contextno++)
> > + {
> > + PgAioUringContext *context = &pgaio_uring_contexts[contextno];
> > + int ret;
> > +
> > + /*
> > + * XXX: Probably worth sharing the WQ between the different rings,
> > + * when supported by the kernel. Could also cause additional
> > + * contention, I guess?
> > + */
> > +#if 0
> > + if (!AcquireExternalFD())
> > + elog(ERROR, "No external FD available");
> > +#endif
>
> Probably remove the "#if 0" or add a comment on why it's here.
Will do. It was an attempt at dealing with the set_max_safe_fds() issue above,
but it turned out to not work at all, given how fd.c currently works.
> > + ret = io_uring_submit(uring_instance);
> > + pgstat_report_wait_end();
> > +
> > + if (ret == -EINTR)
> > + {
> > + pgaio_debug(DEBUG3,
> > + "aio method uring: submit EINTR, nios: %d",
> > + num_staged_ios);
> > + }
> > + else if (ret < 0)
> > + elog(PANIC, "failed: %d/%s",
> > + ret, strerror(-ret));
>
> I still think (see 2024-09-16 review) EAGAIN should do the documented
> recommendation instead of PANIC:
>
> EAGAIN The kernel was unable to allocate memory for the request, or
> otherwise ran out of resources to handle it. The application should wait for
> some completions and try again.
I don't think this can be hit in a recoverable way. We'd likely just end up
with an untested path that quite possibly would be wrong.
What wait time would be appropriate? What problems would it cause if we just
slept while holding critical lwlocks? I think it'd typically just delay the
crash-restart if we did, making it harder to recover from the problem.
Because we are careful to limit how many outstanding IO requests there are on
an io_uring instance, the kernel has to have run *severely* out of memory to
hit this.
I suspect it might currently be *impossible* to hit this due to ENOMEM,
because io_uring will fall back to allocating individual requests if the batch
allocation it normally does fails. My understanding is that for small
allocations the kernel will try to reclaim memory forever; only large ones can
fail.
Even if it were possible to hit, the likelihood that postgres can continue to
work ok if the kernel can't allocate ~250 bytes seems very low.
How about adding a dedicated error message for EAGAIN? IMO io_uring_enter()'s
meaning of EAGAIN is, uhm, unconventional, so a better error message than
strerror() might be good?
Proposed comment:
/*
* The io_uring_enter() manpage suggests that the appropriate
* reaction to EAGAIN is:
*
* "The application should wait for some completions and try
* again"
*
* However, it seems unlikely that that would help in our case, as
* we apply a low limit to the number of outstanding IOs and thus
* also outstanding completions, making it unlikely that we'd get
* EAGAIN while the OS is in good working order.
*
* Additionally, it would be problematic to just wait here, our
* caller might hold critical locks. It'd possibly lead to
* delaying the crash-restart that seems likely to occur when the
* kernel is under such heavy memory pressure.
*/
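In code, something like the following (message wording is just a first
draft):

    else if (ret == -EAGAIN)
    {
        /* ... comment above ... */
        elog(PANIC,
             "io_uring submit failed: kernel is out of resources to handle the request");
    }
    else if (ret < 0)
        elog(PANIC, "io_uring submit failed: %d/%s", ret, strerror(-ret));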
> > + pgstat_report_wait_end();
> > +
> > + if (ret == -EINTR)
> > + {
> > + continue;
> > + }
> > + else if (ret != 0)
> > + {
> > + elog(PANIC, "unexpected: %d/%s: %m", ret, strerror(-ret));
>
> I think errno isn't meaningful here, so %m doesn't belong.
You're right. I wonder if we should make errno meaningful though (by setting
it); the elog.c machinery captures it, and I know that there are logging hooks
that utilize that fact. That'd also avoid the need to use strerror() here.
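I.e. roughly, instead of the strerror() call (message wording illustrative):

    errno = -ret;    /* make errno meaningful for %m and for logging hooks */
    elog(PANIC, "unexpected error while waiting for io_uring completions: %m");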
> > --- a/doc/src/sgml/config.sgml
> > +++ b/doc/src/sgml/config.sgml
> > @@ -2687,6 +2687,12 @@ include_dir 'conf.d'
> > <literal>worker</literal> (execute asynchronous I/O using worker processes)
> > </para>
> > </listitem>
> > + <listitem>
> > + <para>
> > + <literal>io_uring</literal> (execute asynchronous I/O using
> > + io_uring, if available)
> > + </para>
> > + </listitem>
>
> Docs should eventually cover RLIMIT_MEMLOCK per
> https://github.com/axboe/liburing "ulimit settings".
Given the way we currently use io_uring (i.e. no registered buffers), the
RLIMIT_MEMLOCK advice only applies to Linux <= 5.11. I'm not sure that's
worth documenting?
> Maybe RLIMIT_NOFILE, too.
Yea, we probably need to. Depends a bit on where we go with [2] though.
>
> > @@ -2498,6 +2529,12 @@ FilePathName(File file)
> > int
> > FileGetRawDesc(File file)
> > {
> > + int returnCode;
> > +
> > + returnCode = FileAccess(file);
> > + if (returnCode < 0)
> > + return returnCode;
> > +
> > Assert(FileIsValid(file));
> > return VfdCache[file].fd;
> > }
>
> What's the rationale for this function's change?
It flatly didn't work before. I guess I can make that a separate commit.
> > +The main reason to want to use Direct IO are:
>
> > +The main reason *not* to use Direct IO are:
>
> x2 s/main reason/main reasons/
>
> > + and direct IO without O_DSYNC needs to issue a write and after the writes
> > + completion a cache cache flush, whereas O\_DIRECT + O\_DSYNC can use a
>
> s/writes/write's/
>
> > + single FUA write).
>
> I recommend including the acronym expansion: s/FUA/Force Unit Access (FUA)/
>
> > +In an `EXEC_BACKEND` build backends executable code and other process local
>
> s/backends/backends'/
>
> > +state is not necessarily mapped to the same addresses in each process due to
> > +ASLR. This means that the shared memory cannot contain pointer to callbacks.
>
> s/pointer/pointers/
>
> > +The "solution" to this the ability to associate multiple completion callbacks
> > +with a handle. E.g. bufmgr.c can have a callback to update the BufferDesc
> > +state and to verify the page and md.c. another callback to check if the IO
> > +operation was successful.
>
> One of these or similar:
> s/md.c. another/md.c can have another/
> s/md.c. /md.c /
All applied.
> I've got one high-level question that I felt could take too long to answer for
> myself by code reading. What's the cleanup story if process A does
> elog(FATAL) with unfinished I/O? Specifically:
It's a good question. Luckily there's a relatively easy answer:
pgaio_shutdown() is registered via before_shmem_exit() in pgaio_init_backend(),
and it waits for all IOs to finish.
The main reason this exists is that the AIO mechanisms in various OSs, at
least in some OS versions, don't like it if the issuing process exits while
the IO is in flight. IIRC that was the case in v1 with posix_aio (which we
don't support in v2, and probably should never use) and I think also with
io_uring in some kernel versions.
Another reason is that those requests would show up in pg_aios (or whatever we
end up naming it) until they're reused, which doesn't seem great.
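In code terms the relevant part is just (simplified, not a verbatim quote of
the patch):

    /* pgaio_init_backend(): */
    before_shmem_exit(pgaio_shutdown, 0);

    /*
     * pgaio_shutdown() then waits for each IO this backend still has in
     * flight to complete, before the backend detaches from shared memory.
     */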
> - Suppose some other process B reuses the shared memory AIO data structures
> that pertained to process A. After that, some process C completes the I/O
> in shmem. Do we avoid confusing B by storing local callback data meant for
> A in shared memory now pertaining to B?
This will, before pgaio_shutdown() gets involved, also be prevented by local
callbacks being cleared by resowner cleanup. We take care that resowner
cleanup happens before process exit. That's important, because the backend
local pointer could be invalidated by an ERROR.
> - Thinking more about this README paragraph:
>
> +In addition to completion, AIO callbacks also are called to "prepare" an
> +IO. This is, e.g., used to increase buffer reference counts to account for the
> +AIO subsystem referencing the buffer, which is required to handle the case
> +where the issuing backend errors out and releases its own pins while the IO is
> +still ongoing.
>
> Which function performs that reference count increase? I'm not finding it
> today.
Ugh, I just renamed the relevant functions in my local branch, while trying to
reduce the code duplication between shared and local buffers ;).
In <= v2.6 it's shared_buffer_stage_common() and local_buffer_readv_stage().
In v2.7-to-be it is buffer_stage_common(), which now supports both shared and
local buffers.
> I wanted to look at how it ensures the issuing backend still exists as the
> function increases the reference count.
The reference count is increased solely in the BufferDesc, *not* in the
backend-local pin tracking. Earlier I had tracked the pin in BufferDesc for
shared buffers (as the pin needs to be released upon completion, which might
be in another backend), but in LocalRefCount[] for temp buffers. But that
turned out to not work when the backend errors out, as it would make
CheckForLocalBufferLeaks() complain.
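I.e. for a shared buffer, the pin taken by buffer_stage_common() is
conceptually just (simplified):

    buf_state = LockBufHdr(buf_hdr);
    buf_state += BUF_REFCOUNT_ONE;    /* pin owned by the AIO subsystem */
    UnlockBufHdr(buf_hdr, buf_state);
    /* deliberately no PrivateRefCount/LocalRefCount bookkeeping here */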
>
> One later-patch item:
>
> > +static PgAioResult
> > +SharedBufferCompleteRead(int buf_off, Buffer buffer, uint8 flags, bool failed)
> > +{
> ...
> > + TRACE_POSTGRESQL_BUFFER_READ_DONE(tag.forkNum,
> > + tag.blockNum,
> > + tag.spcOid,
> > + tag.dbOid,
> > + tag.relNumber,
> > + INVALID_PROC_NUMBER,
> > + false);
>
> I wondered about whether the buffer-read-done probe should happen in the
> process that calls the complete_shared callback or in the process that did the
> buffer-read-start probe.
Yea, that's a good point. I should at least have added a comment pointing out
that it's a choice with pros and cons.
The reason I went for doing it in the completion callback is that it seemed
better to get the READ_DONE event as soon as possible, even if the issuer of
the IO is currently busy doing other things. The shared completion callback is
after all where the buffer state is updated for shared buffers.
But I think you have a point too.
> When I see dtrace examples, they usually involve explicitly naming each PID
> to trace
TBH, I've only ever used our tracepoints via perf and bpftrace, not dtrace
itself. For those it's easy to trace more than just a single pid and to
monitor system-wide; I don't really know enough about using dtrace itself.
> Assuming that's indeed the norm, I think the local callback would
> be the better place, so a given trace contains both probes.
Seems like a shame to add an extra indirect function call for a tracing
feature that afaict approximately nobody ever uses (IIRC we several times have
passed wrong things to tracepoints without that being noticed).
TBH, the tracepoints are so poorly documented and maintained that I was
tempted to suggest removing them a couple times.
This was an awesome review, thanks!
Andres Freund
[1] https://man.openbsd.org/sem_init.3#STANDARDS
[2] https://postgr.es/m/D80MHNSG4EET.6MSV5G9P130F%40jeltef.nl