From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: block-level incremental backup
Date: 2019-04-22 20:08:18
Message-ID: CA+TgmoaSS6rK00wG+r5AqPi7Y0ephO2d9zqeeDYNvXPjOt1jZA@mail.gmail.com
Lists: pgsql-hackers
On Mon, Apr 22, 2019 at 2:26 PM Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> There was basically zero discussion about what things would look like at
> a protocol level (I went back and skimmed over the thread before sending
> my last email to specifically see if I was going to get this response
> back..). I get the idea behind the diff file, the contents of which I
> wasn't getting into above.
Well, I wrote:
"There should be a way to tell pg_basebackup to request from the
server only those blocks where LSN >= threshold_value."
I guess I assumed that people interested in the details would take
that to mean "and therefore the protocol would grow an option for this
type of request in whatever way is the most straightforward possible
extension of the current functionality," which is indeed how you
eventually interpreted it when you said we could "extend BASE_BACKUP
by adding LSN as an optional parameter."
I could have been more explicit, but sometimes people tell me that my
emails are too long.
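For concreteness, and purely as an illustration rather than a syntax
proposal, the sort of thing I have in mind on the replication-protocol
side would be roughly this (the LSN option is hypothetical; the other
options are the existing BASE_BACKUP ones):

    -- existing replication command, with a hypothetical LSN threshold added
    BASE_BACKUP LABEL 'incr-2019-04-22' WAL NOWAIT LSN '0/5000028';
    -- the server would send only blocks whose page LSN >= the threshold,
    -- plus enough metadata for the client to reassemble complete files later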
> external tools to leverage that. It sounds like what you're suggesting
> now is that you're happy to implement the backend code, expose it in a
> way that works just for pg_basebackup, and that if someone else wants to
> add things to the protocol to make it easier for external tools to
> leverage, great.
Yep, that's more or less it, although I am potentially willing to do
some modest amount of that other work along the way. I just don't
want to prioritize it higher than getting the actual thing I want to
build built, which I think is a pretty fair position for me to take.
> All I can say is that that's basically how we ended up
> in the situation we're in today where pg_basebackup doesn't support
> parallel backup but a bunch of external tools do and they don't go
> through the backend to get there, even though they'd probably prefer to.
I certainly agree that core should try to do things in a way that is
useful to external tools when that can be done without undue effort,
but only if it can actually be done without undue effort. Let's see
whether that's the case here:
- Anastasia wants a command added that dumps out whatever the server
knows about what files have changed, which I already agreed was a
reasonable extension of my initial proposal.
- You said that for this to be useful to pgbackrest, it'd have to use
a whole different mechanism that includes commands to request
individual files and blocks within those files, which would be a
significant rewrite of pg_basebackup that you agreed is more closely
related to parallel backup than to the project under discussion on
this thread. And that even then pgbackrest probably wouldn't use it
because it also does server-side compression and encryption which are
not included in this proposal.
It seems to me that the first one falls into the category of a reasonable
additional effort and the second one falls into the category of lots
of extra and unrelated work that wouldn't even get used.
> Thanks for sharing your thoughts on that, certainly having the backend
> able to be more intelligent about streaming files to avoid latency is
> good and possibly the best approach. Another alternative to reducing
> the latency would be to have a way for the client to request a set of
> files, but I don't know that it'd be better.
I don't know either. This is an area that needs more thought, I
think, although as discussed, it's more related to parallel backup
than $SUBJECT.
> I'm not really sure why the above is extremely inconvenient for
> third-party tools, beyond just that they've already been written to work
> with an assumption that the server-side of things isn't as intelligent
> as PG is.
Well, one thing you might want to do is have a tool that connects to
the server, enters backup mode, requests information on what blocks
have changed, copies those blocks via direct filesystem access, and
then exits backup mode. Such a tool would really benefit from a
START_BACKUP / SEND_FILE_LIST / SEND_FILE_CONTENTS / STOP_BACKUP
command language, because it would just skip ever issuing the
SEND_FILE_CONTENTS command in favor of doing that part of the work via
other means. On the other hand, a START_PARALLEL_BACKUP LSN '1/234'
command is useless to such a tool.
Contrariwise, a tool that has its own magic - perhaps based on
WAL-scanning or something like ptrack - to know which files currently
exist and which blocks are modified could use SEND_FILE_CONTENTS but
not SEND_FILE_LIST. And a filesystem-snapshot based technique might
use START_BACKUP and STOP_BACKUP but nothing else.
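To sketch how those pieces might combine, using the command names above
but with arguments I am inventing only for illustration:

    -- Tool that copies modified blocks via direct filesystem access:
    START_BACKUP
    SEND_FILE_LIST LSN '1/234'       -- learn which files/blocks changed
    -- ... copy those blocks itself, never issuing SEND_FILE_CONTENTS ...
    STOP_BACKUP

    -- Tool with its own change tracking (WAL-scanning, ptrack, etc.):
    START_BACKUP
    SEND_FILE_CONTENTS 'base/16384/16397' BLOCKS 7, 42, 118
    STOP_BACKUP

    -- Filesystem-snapshot based tool:
    START_BACKUP
    -- ... take the snapshot ...
    STOP_BACKUP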
In short, providing granular commands like this lets the client be
really intelligent even if the server isn't, and lets the client have
fine-grained control of the process. This is very good if you're an
out-of-core tool maintainer and your tool is trying to be smarter than
- or even just differently-designed than - core.
But if what you really want is just a maximally-efficient parallel
backup, you don't need the commands to be fine-grained like this. You
don't even really *want* the commands to be fine-grained like this,
because it's better if the server works it all out so as to avoid
unnecessary network round-trips. You just want to tell the server
"hey, I want to do a parallel backup with 5 participants - hit me!"
and have it do that in the most efficient way that it knows how,
without forcing the client to make any decisions that can be made just
as well, and perhaps more efficiently, on the server.
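Again just to illustrate the contrast, and with the spelling invented,
the coarse-grained interface might be nothing more than:

    START_PARALLEL_BACKUP LSN '1/234' PARTICIPANTS 5
    -- the server splits the work across 5 streams and schedules it
    -- however it thinks is most efficient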
On the third hand, one advantage of having the fine-grained commands
is that it would not only make it easier for out-of-core tools to do
cool things, but also in-core tools. For instance, you can imagine
being able to do something like:
pg_basebackup -D outputdir -d conninfo --copy-files-from=$PGDATA
If the client is using what I'm calling fine-grained commands, this is
easy to implement. If it's just calling a piece of server side
functionality that sends back a tarball as a blob, it's not.
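Roughly, and only as a sketch of why the fine-grained commands make
that easy (file names and arguments invented for illustration), the
client-side loop for such an option could be:

    START_BACKUP
    SEND_FILE_LIST                        -- what does the server have?
    -- for each listed file: if a matching copy already exists under
    -- --copy-files-from, copy it locally and skip the network; otherwise:
    SEND_FILE_CONTENTS 'base/16384/2619'  -- fetch only what's missing
    STOP_BACKUP

With a server-side blob of tar data, there's no per-file decision point
where the client could substitute a local copy.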
So each approach has some pros and cons.
> I'm disappointed that the concerns about the trouble that end users are
> likely to have with this didn't garner more discussion.
Well, we can keep discussing things. I've tried to reply to as many
of your concerns as I can, but I believe you've written more email on
this thread than everyone else combined, so perhaps I haven't entirely
been able to keep up.
That being said, as far as I can tell, those concerns were not
seconded by anyone else. Also, if I understand correctly, when I
asked how we could avoid that problem, you said that you didn't know.
And I said it seemed like we would need to do a very expensive operation at
server startup, or magic. So I feel that perhaps it is a problem that
(1) is not of great general concern and (2) to which no really
superior engineering solution is possible.
I may, however, be mistaken.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company