From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Julien Rouhaud <rjuju123(at)gmail(dot)com>
Cc: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>, michael(at)paquier(dot)xyz, tatsuro(dot)yamada(dot)tf(at)nttcom(dot)co(dot)jp, masao(dot)fujii(at)oss(dot)nttdata(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Duplicate history file?
Date: 2021-06-15 18:28:04
Message-ID: 20210615182803.GB20766@tamriel.snowman.net
Lists: pgsql-hackers
Greetings,
* Julien Rouhaud (rjuju123(at)gmail(dot)com) wrote:
> On Tue, Jun 15, 2021 at 11:33:10AM -0400, Stephen Frost wrote:
> > The requirements are things which are learned over years and change
> > over time. Trying to document them and keep up with them would be a
> > pretty serious project all on its own. There are external projects that
> > spend serious time and energy doing their best to provide the tooling
> > needed here and we should be promoting those, not trying to pretend like
> > this is a simple thing which anyone could write a short perl script to
> > accomplish.
>
> The fact that this is such a complex problem is the very reason why we should
> spend a lot of energy documenting the various requirements. Otherwise, how
> could anyone implement a valid program for that and how could anyone validate
> that a solution claiming to do its job actually does its job?
Reading the code.
> > Already tried doing it in perl. No, it's not simple and it's also
> > entirely vaporware today and implies that we're going to develop this
> > tool, improve it in the future as we realize it needs to be improved,
> > and maintain it as part of core forever. If we want to actually adopt
> > and pull in a backup tool to be part of core then we should talk about
> > things which actually exist, such as the various existing projects that
> > have been written to specifically work to address all the requirements
> > which are understood today, not say "well, we can just write a simple
> > perl script to do it" because it's not actually that simple.
>
> Adopting a full backup solution seems a bit extreme. On the other hand,
> having some real core implementation of an archive_command for the most general
> use cases (local copy, distant copy over ssh...) could make sense. This would
> remove that burden for some, probably most, of the 3rd party backup tools, and
> would also ensure that the various requirements are properly documented since
> it would be the implementation reference.
Having a database platform that hasn't got a full backup solution is a
pretty awkward position to be in.
I'd like to see something a bit more specific than handwaving about how
core could provide something in this area which would remove the burden
from other tools. Would also be good to know who is going to write that
and maintain it.
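To make the complexity argument concrete, here is a sketch of roughly what even the "simple" local-copy archive_command case needs to get right: refusing to silently overwrite a file with different contents, copying atomically via a temporary name, and flushing both the file and its directory entry to disk. Everything here is hypothetical (the function name and the ARCHIVE_DIR variable are illustrative, not anything PostgreSQL ships), and it still omits plenty of real-world requirements (signal handling, exit-code semantics, remote copies, monitoring).

```shell
# Hypothetical local-copy archiver, invoked as: archive_wal <%p> <%f>
# ARCHIVE_DIR is an assumed destination directory, not a real setting.
archive_wal() {
    src=$1
    dest="$ARCHIVE_DIR/$2"

    # A pre-existing file with different contents means something is
    # wrong; silently overwriting it could corrupt the archive.
    if [ -e "$dest" ]; then
        cmp -s "$src" "$dest" && return 0  # identical: already archived
        echo "refusing to overwrite $dest with different contents" >&2
        return 1
    fi

    # Copy under a temporary name, flush, then rename, so a crash can
    # never leave a partial file under the final name.
    tmp="$dest.tmp.$$"
    cp "$src" "$tmp" &&
    sync "$tmp" &&          # GNU coreutils sync(1) can flush one file
    mv "$tmp" "$dest" &&
    sync "$ARCHIVE_DIR"     # persist the directory entry as well
}
```

Even this sketch glosses over questions a production tool has to answer (what counts as a retryable failure, how to avoid re-archiving races, what to do on partial-write detection), which is exactly the kind of accumulated knowledge the existing backup projects encode.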
Thanks,
Stephen