From: Nathan Bossart <nathandbossart(at)gmail(dot)com>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Fujii Masao <fujii(at)postgresql(dot)org>, Postgres hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Weird failure with latches in curculio on v15
Date: 2023-02-09 00:24:13
Message-ID: 20230209002413.GA603595@nathanxps13
Lists: pgsql-hackers
On Thu, Feb 09, 2023 at 08:56:24AM +0900, Michael Paquier wrote:
> On Wed, Feb 08, 2023 at 02:25:54PM -0800, Nathan Bossart wrote:
>> These are all good points. Perhaps there could be a base archiver
>> implementation that shell_archive uses (and that other modules could use if
>> desired, which might be important for backward compatibility with the
>> existing callbacks). But if you want to do something fancier than
>> archiving sequentially, you could write your own.
>
> Which is basically the kind of things you can already achieve with a
> background worker and a module of your own?
IMO one of the big missing pieces is a way to get the next N files to
archive. Right now, you'd have to trawl through archive_status on your
own if you wanted to batch or parallelize. I think one advantage of what
Robert is suggesting is that we could easily provide a supported way to get
the next set of files to archive and to asynchronously mark them "done".
Otherwise, each module has to implement that logic itself.
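To make the gap concrete, here is a minimal sketch of the "trawling" a batching archive module has to do today: scan the archive_status directory for `.ready` status files and take the oldest N. This is an illustrative stand-alone sketch, not PostgreSQL's actual C API; the directory layout (one `<segment>.ready` file per archivable WAL segment, renamed to `.done` afterward) matches what the server maintains, but the function name and batch handling are assumptions.

```python
import os
import tempfile

def next_ready_segments(archive_status_dir, n):
    """Return up to n WAL segment names that have a .ready status file,
    oldest first (segment names sort lexicographically in WAL order)."""
    ready = [f[:-len(".ready")]
             for f in os.listdir(archive_status_dir)
             if f.endswith(".ready")]
    return sorted(ready)[:n]

# Demonstrate against a fake archive_status directory.
with tempfile.TemporaryDirectory() as d:
    for name in ("000000010000000000000002.ready",
                 "000000010000000000000001.ready",
                 "000000010000000000000003.ready",
                 "000000010000000000000000.done"):  # already archived
        open(os.path.join(d, name), "w").close()
    print(next_ready_segments(d, 2))
    # -> ['000000010000000000000001', '000000010000000000000002']
```

A supported callback that hands the module this batch (and accepts asynchronous "done" notifications) would let every module drop this boilerplate.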
--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com