From: Julien Rouhaud <rjuju123(at)gmail(dot)com>
To: Andrey Borodin <x4mmm(at)yandex-team(dot)ru>
Cc: "Bossart, Nathan" <bossartn(at)amazon(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: parallelizing the archiver
Date: 2021-09-10 06:11:57
Message-ID: CAOBaU_a_xvFRKvEmQ-pnv1x-mrDHmAk6oWje+GsF8WDpK1Qhiw@mail.gmail.com
Lists: pgsql-hackers
On Fri, Sep 10, 2021 at 2:03 PM Andrey Borodin <x4mmm(at)yandex-team(dot)ru> wrote:
>
> > On 10 Sept 2021, at 10:52, Julien Rouhaud <rjuju123(at)gmail(dot)com> wrote:
> >
> > Yes, but it also means that it's up to every single archiving tool to
> > implement a somewhat hackish parallel version of an archive_command,
> > hoping that core won't break it.
> I'm not proposing to remove the existing archive_command, just to deprecate its one-WAL-per-call form.
Which is a big API break.
> It's a very simplistic approach. If some GUC is set, the archiver will just feed ready files to the stdin of the archive command. What fundamental design changes do we need?
I'm talking about the commands themselves. Your suggestion is to
change archive_command so that it can spawn a daemon, which is a
totally different approach. I'm not saying that a daemon-based
approach to archiving is a bad idea; I'm saying that trying to fit
it into the current archive_command plus some new GUC looks like a
bad idea.
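
For illustration only, here is a minimal sketch of the kind of stdin-driven archive process Andrey describes: the server (under the proposed, not-yet-existing GUC) would write one ready WAL segment name per line to the command's stdin instead of invoking archive_command once per segment. The paths, the acknowledgment protocol, and the command itself are all assumptions, not part of any patch:

```python
#!/usr/bin/env python3
# Hypothetical stdin-fed archive daemon (assumed protocol: one ready WAL
# segment name per line on stdin, one acknowledged name per line on stdout).
import shutil
import sys
from pathlib import Path

PG_WAL = Path("pg_wal")                 # assumed location of ready segments
ARCHIVE_DIR = Path("/mnt/archive/wal")  # assumed archive destination

def main() -> int:
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    for line in sys.stdin:
        segment = line.strip()
        if not segment:
            continue
        src = PG_WAL / segment
        dst = ARCHIVE_DIR / segment
        # Copy under a temporary name first, then rename, so an interrupted
        # copy never leaves a partially written segment that looks archived.
        tmp = dst.with_name(segment + ".tmp")
        shutil.copy2(src, tmp)
        tmp.rename(dst)
        # Report the segment as safely archived; the server side of such a
        # protocol would need some acknowledgment like this before recycling.
        print(segment, flush=True)
    return 0

if __name__ == "__main__":
    sys.exit(main())
```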