From: | David Steele <david(at)pgmasters(dot)net> |
---|---|
To: | Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: PATCH: Exclude unlogged tables from base backups |
Date: | 2018-01-24 18:25:16 |
Message-ID: | f2c8f9dc-6515-0396-5f82-34dc4e93298e@pgmasters.net |
Lists: | pgsql-hackers |
Hi Masahiko,
Thanks for the review!
On 1/22/18 3:14 AM, Masahiko Sawada wrote:
> On Thu, Dec 14, 2017 at 11:58 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>>
>> We would also have a problem if the missing file caused something in
>> recovery to croak on the grounds that the file was expected to be
>> there, but I don't think anything works that way; I think we just
>> assume missing files are an expected failure mode and silently do
>> nothing if asked to remove them.
>
> I also couldn't see a problem with this approach.
>
> Here are my first review comments.
>
> + unloggedDelim = strrchr(path, '/');
>
> I don't think this works correctly on Windows. How about using
> last_dir_separator() instead?
I think this function is OK on Windows -- we use it quite a bit.
However, last_dir_separator() is clearer so I have changed it.
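For reference, a minimal sketch of the substitution (same path variable
as in the quoted hunk; last_dir_separator() is in src/port/path.c and
matches both '/' and '\' on Windows):

    char       *unloggedDelim;

    /* strrchr() only matches '/', which backend-built paths do use */
    unloggedDelim = strrchr(path, '/');

    /* last_dir_separator() also handles '\' on Windows, and reads better */
    unloggedDelim = last_dir_separator(path);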
> ----
> + * Find all unlogged relations in the specified directory and return their OIDs.
>
> What ResetUnloggedRelationsHash() actually returns is a hash table, so
> the comment on this function seems inaccurate.
Fixed.
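(I assume the reworded comment now reads something like the following;
the exact wording is mine, not copied from the patch:)

    /*
     * Find all unlogged relations in the specified directory and return a
     * hash table of their OIDs.
     */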
> + /* Part of path that contains the parent directory. */
> + int parentPathLen = unloggedDelim - path;
> +
> + /*
> +  * Build the unlogged relation hash if the parent path is either
> +  * $PGDATA/base or a tablespace version path.
> +  */
> + if (strncmp(path, "./base", parentPathLen) == 0 ||
> +     (parentPathLen >= (sizeof(TABLESPACE_VERSION_DIRECTORY) - 1) &&
> +      strncmp(unloggedDelim - (sizeof(TABLESPACE_VERSION_DIRECTORY) - 1),
> +              TABLESPACE_VERSION_DIRECTORY,
> +              sizeof(TABLESPACE_VERSION_DIRECTORY) - 1) == 0))
> +     unloggedHash = ResetUnloggedRelationsHash(path);
> + }
>
> How about using get_parent_directory() to get the parent directory name?
get_parent_directory() munges the string that is passed to it, which I
was trying to avoid (we'd need a copy), and I don't think it makes the
rest of the logic any simpler without constructing yet another string
to hold the tablespace path.
I know performance isn't the most important thing here, so if the
argument is for clarity then perhaps it makes sense. Otherwise I'm not
sure it's worth it.
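To make the trade-off concrete, here is roughly what I think the
get_parent_directory() version would look like (my approximation, not
code from the patch; it assumes a MAXPGPATH-sized copy of the same path
variable):

    char        parentPath[MAXPGPATH];
    char       *parentName;

    /* get_parent_directory() truncates its argument in place, hence the copy */
    strlcpy(parentPath, path, sizeof(parentPath));
    get_parent_directory(parentPath);

    /* last component of the parent, e.g. "base" or the tablespace version dir */
    parentName = last_dir_separator(parentPath);

    if (strcmp(parentPath, "./base") == 0 ||
        (parentName != NULL &&
         strcmp(parentName + 1, TABLESPACE_VERSION_DIRECTORY) == 0))
        unloggedHash = ResetUnloggedRelationsHash(path);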
> Also, I think it's better to destroy the unloggedHash after use.
Whoops! Fixed.
> + /* Exclude all forks for unlogged tables except the init fork. */
> + if (unloggedHash &&
> +     ResetUnloggedRelationsMatch(unloggedHash, de->d_name) == unloggedOther)
> + {
> +     elog(DEBUG2, "unlogged relation file \"%s\" excluded from backup",
> +          de->d_name);
> +     continue;
> + }
>
> I think it's better to log this debug message at DEBUG2 level for
> consistency with other messages.
I think you mean DEBUG1? It's already at DEBUG2.
I considered using DEBUG1 but decided against it. The other exclusions
will produce a limited amount of output because there are only a few of
them. In the case of unlogged tables there could be any number of
exclusions and I thought that was too noisy for DEBUG1.
> + ok(!-f "$tempdir/tbackup/tblspc1/$tblspc1UnloggedBackupPath",
> + 'unlogged imain fork not in tablespace backup');
>
> s/imain/main/
Fixed.
> If a new unlogged relation is created after the unloggedHash is
> constructed but before the files are sent, we cannot exclude that
> relation. It would not be a problem if the backup is short, because
> the new unlogged relation is unlikely to become very large. However,
> if taking a backup takes a long time, we could end up including a
> large main fork in the backup.
This is a good point. The hash is built per database directory, which
makes the window a little smaller, but maybe not by much.
Three options here:
1) Leave it as is, knowing that unlogged relations created during the
backup may be copied, and document it that way.
2) Construct a list for SendDir() to work against so the gap between
creating that list and creating the unlogged hash is as small as
possible. The downside is that the list may be very large and take up a
lot of memory.
3) Check each file that looks like a relation in the loop to see if it
has an init fork. This might affect performance, since an
opendir/readdir loop would be required for every relation (a rough
sketch of this check follows the list).
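For what it's worth, here is a rough sketch of how #3 could look inside
the SendDir() loop, probing for the init fork with lstat() rather than
a full readdir pass (variable names are assumed from the surrounding
loop; this is not part of the attached patches):

    char        initForkPath[MAXPGPATH * 2];
    char        relNumber[MAXPGPATH];
    struct stat statbuf;

    /* Strip any fork or segment suffix ("_fsm", "_vm", ".1", ...). */
    strlcpy(relNumber, de->d_name, sizeof(relNumber));
    relNumber[strcspn(relNumber, "_.")] = '\0';

    snprintf(initForkPath, sizeof(initForkPath), "%s/%s_init",
             path, relNumber);

    /*
     * If a matching init fork exists and this file is not the init fork
     * itself, it belongs to an unlogged relation and can be skipped.
     */
    if (strstr(de->d_name, "_init") == NULL &&
        lstat(initForkPath, &statbuf) == 0)
    {
        elog(DEBUG2, "unlogged relation file \"%s\" excluded from backup",
             de->d_name);
        continue;
    }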
Personally, I'm in favor of #1, at least for the time being. I've
updated the docs as indicated in case you and Adam agree.
New patches attached.
Thanks!
--
-David
david(at)pgmasters(dot)net
Attachment | Content-Type | Size |
---|---|---|
exclude-unlogged-v2-01.patch | text/plain | 4.0 KB |
exclude-unlogged-v2-02.patch | text/plain | 7.1 KB |
exclude-unlogged-v2-03.patch | text/plain | 7.2 KB |