From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: pg_basebackup failed to back up large file
Date: 2014-06-03 14:19:37
Message-ID: CAHGQGwH0OKZ6cKpJKCWOjGa3ejwfFm1eNrmRO3dkdoTeaai-eg@mail.gmail.com
Lists: pgsql-hackers
Hi,
I received an off-list bug report that pg_basebackup fails with an error
when there is a large file (e.g., 4GB) in the database cluster. The
problem is easy to reproduce:
$ dd if=/dev/zero of=$PGDATA/test bs=1G count=4
$ pg_basebackup -D hoge -c fast
pg_basebackup: invalid tar block header size: 32768
2014-06-03 22:56:50 JST data LOG: could not send data to client: Broken pipe
2014-06-03 22:56:50 JST data ERROR: base backup could not send data,
aborting backup
2014-06-03 22:56:50 JST data FATAL: connection to client lost
The cause of this problem is that pg_basebackup uses an integer to
store the size of the file to receive from the server, so an integer
overflow can happen when the file is very large. I think that
pg_basebackup should handle even such large files properly, because
they can legitimately exist in the database cluster; for example,
a server log file under $PGDATA/pg_log can grow that large.
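For illustration, here is a minimal sketch of the overflow, not the actual
pg_basebackup code: the helper read_tar_size below is hypothetical, but it
parses a tar header's 12-byte octal size field the usual way, and shows how
a 4GB size survives in a uint64 but is mangled by a cast to int.

#include <stdio.h>
#include <stdint.h>

/*
 * Hypothetical helper: parse the 12-byte octal "size" field of a
 * tar header into a 64-bit unsigned integer.
 */
static uint64_t
read_tar_size(const char *field)
{
    uint64_t    size = 0;
    int         i;

    for (i = 0; i < 11 && field[i] >= '0' && field[i] <= '7'; i++)
        size = size * 8 + (field[i] - '0');

    return size;
}

int
main(void)
{
    /* 4GB (2^32 bytes) in octal, as a tar header would store it */
    const char  field[12] = "40000000000";

    uint64_t    size64 = read_tar_size(field);
    int         size32 = (int) size64;  /* the overflowing path */

    printf("uint64: %llu\n", (unsigned long long) size64);  /* 4294967296 */
    printf("int:    %d\n", size32);     /* implementation-defined; typically 0 */
    return 0;
}

With a plain int the high bits are lost, so the receiver's idea of how many
bytes remain in the file no longer matches what the server sends, which is
consistent with the "invalid tar block header size" failure above.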
The attached patch changes pg_basebackup to use uint64 to store
the file size, so the integer overflow no longer occurs.
Thoughts?
Regards,
--
Fujii Masao
Attachment: 0001-Fix-pg_basebackup-so-that-it-can-back-up-even-large-.patch (text/x-patch, 1.8 KB)