From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Guillaume Lelarge <guillaume(at)lelarge(dot)info>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Bugs List <pgsql-bugs(at)postgresql(dot)org>, Josh Berkus <josh(at)agliodbs(dot)com>
Subject: Re: Confusing error message with too-large file in pg_basebackup
Date: 2015-11-21 20:58:41
Message-ID: 10137.1448139521@sss.pgh.pa.us
Lists: pgsql-bugs

I wrote:
> I looked into the GNU tar sources and confirmed that gtar supports this
> concept. (It looks from the GNU sources like they've supported it for
> a *really long time*, like since the 90's, in which case wikipedia's
> credit to "star" for the idea is full of it.)
I've now also verified that the "tar" present on OS X, which is bsdtar,
handles the GNU base-256 convention just fine, and has done so at least
since Snow Leopard (released 2009).
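For reference, the convention itself is simple enough to sketch. The
following is illustrative only, not the code in the attached patch, and
the function and field names are made up for the example:

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Illustration only.  A classic tar header stores the file size as
     * 11 octal digits in a 12-byte field, which maxes out at 8GB-1.  The
     * GNU convention sets the high bit of the first byte and stores the
     * value as a big-endian binary number instead.
     */
    static void
    tar_write_size(char *field, uint64_t value)
    {
        if (value < ((uint64_t) 1 << 33))
        {
            /* fits in 11 octal digits plus terminating NUL */
            snprintf(field, 12, "%011llo", (unsigned long long) value);
        }
        else
        {
            int     i;

            /* base-256: flag bit in byte 0, big-endian binary in the rest */
            for (i = 11; i > 0; i--)
            {
                field[i] = (char) (value & 0xFF);
                value >>= 8;
            }
            field[0] = (char) (0x80 | (value & 0x7F));
        }
    }

A reader does the inverse: if the first byte has the 0x80 bit set, decode
the remaining bytes as a big-endian binary number, otherwise parse octal
as before. That is the rule both gtar and bsdtar apply.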
> Hence, I propose something like the attached (WIP, has no doc
> corrections). I've done simple testing on the pg_dump/pg_restore code
> path, but not on basebackup --- anyone want to test that?
The attached updated patch fixes the pg_dump docs (the pg_basebackup docs
don't address the point, so there's nothing to fix there), and just for
cleanliness' sake, gets rid of all use of sprintf and sscanf to process
tar headers.
I've now tested the basebackup path, and in addition done testing on a
32-bit machine, which caused me to realize that the existing code is
even more broken than I thought. Quite aside from the documented 8GB
limitation, we have these issues:
* tarCreateHeader's file-size argument is size_t, which means that on
32-bit machines it will write a corrupt tar header for file sizes between
4GB and 8GB, even though no error is raised (if pgoff_t is 64 bits, which
it will be on just about any remotely modern platform). This is very
very bad: you could have an unreadable backup and not know it. We
evidently broke this in 9.3 while refactoring the tar-writing logic so
basebackup could use it.
* pg_restore fails on tables of size between 4GB and 8GB on machines
where either size_t or unsigned long is 32 bits, even if the archive was
written by a 64-bit machine and hence is not corrupted by the previous
bug. If you use Windows 64 you don't even need to transport the archive
to a different machine to get bitten by this. This bug goes back a very
long way.
* pg_basebackup will fail if there are files of size between 4GB and 8GB,
even on 64-bit machines, because it tells sscanf to read the file size
as "unsigned int" (this and the size_t truncation above are sketched
after this list).
* As I noted yesterday, pg_basebackup fails entirely, for any file size,
on 64-bit big-endian machines.
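To make the size_t and "unsigned int" failure modes concrete, here is a
small standalone illustration. It assumes a build where size_t and
unsigned int are 32 bits while pgoff_t is 64 bits, and the variable names
are invented for the example:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        uint64_t    filesize = UINT64_C(6) * 1024 * 1024 * 1024;   /* a 6GB file */

        /*
         * First hazard: push the 64-bit size through a 32-bit size_t-style
         * parameter, as tarCreateHeader effectively does on a 32-bit build.
         * The high bits are silently dropped and the header records 2GB.
         */
        uint32_t    via_size_t = (uint32_t) filesize;

        /*
         * Third hazard: parse the octal size field ("60000000000" is 6GB)
         * into a 32-bit unsigned int (done here via strtoull plus a cast to
         * keep the example well-defined; the old code used sscanf directly).
         * The value wraps modulo 2^32, again giving 2GB with no error.
         */
        unsigned int via_uint = (unsigned int) strtoull("60000000000", NULL, 8);

        printf("real %llu, via size_t %u, via unsigned int %u\n",
               (unsigned long long) filesize, (unsigned) via_size_t, via_uint);
        return 0;
    }
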
These all seem to me to be must-fix issues even if we didn't want to
back-port use of the base-256 convention. Seeing that we have to deal
with these things, I'm inclined to just back-port the whole patch to
all branches.
regards, tom lane
Attachment: remove-tar-size-limit-2.patch (text/x-diff, 18.8 KB)