From: Antony Gelberg <antony(dot)gelberg(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Stuck trying to backup large database - best practice?
Date: 2015-01-12 15:20:37
Message-ID: CADbCqvFvnGm0AhfQscDQLK7FhJu11JvAgTLsE0yN9U2rtLcSQA@mail.gmail.com
Lists: pgsql-general
Hi,
We have a PostgreSQL 9.3.x box with 1.3TB of free space and a database of
around 1.8TB. Unfortunately, we're struggling to back it up.
When we try a compressed backup with the following command:
pg_basebackup -D "$BACKUP_PATH/$TIMESTAMP" -Ft -Z9 -P -U "$DBUSER" -w
we get this error:
pg_basebackup: could not get transaction log end position from server:
ERROR: requested WAL segment 0000000400002B9F000000B4 has already been
removed
This attempted backup reached 430GB before failing.
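For context on why the segment was "already removed": the server recycles old WAL as it goes, and a long-running (heavily compressed) backup can outlive the segments pg_basebackup needs to fetch at the end. A quick back-of-envelope for how much pg_xlog space a given wal_keep_segments pins, assuming the standard 16MB segment size on 9.3 (the 4096 figure is a hypothetical example, not a recommendation):

```shell
# Each WAL segment is 16MB by default on 9.3.
# Retained WAL = wal_keep_segments * 16MB; size it to cover the WAL
# your server generates over the full duration of one base backup.
WAL_KEEP_SEGMENTS=4096   # hypothetical value, not a recommendation
SEGMENT_MB=16
echo "$(( WAL_KEEP_SEGMENTS * SEGMENT_MB / 1024 ))GB pinned in pg_xlog"
# prints: 64GB pinned in pg_xlog
```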
We were advised on IRC to try -Xs, but that only works with a plain
(uncompressed) backup, and as you'll note from above, we don't have enough
disk space for this.
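One way to avoid needing room for an uncompressed copy is to have pg_basebackup write the tar to stdout (-D -, tar format, single-tablespace clusters only) and compress in a pipeline; a lighter compression level than -Z9 also shortens the backup window, which is what lets the WAL age out. A sketch only, assuming a single tablespace and reusing the same placeholder variables as the original command:

```shell
# Stream the base backup as a tar to stdout and compress outside
# pg_basebackup; using gzip -1 trades compression ratio for speed.
# The output could equally be piped over ssh to another host.
# Single-tablespace clusters only; no WAL is bundled, so the matching
# WAL must come from wal_keep_segments or an archive at restore time.
pg_basebackup -Ft -D - -U "$DBUSER" -w | gzip -1 > "$BACKUP_PATH/$TIMESTamp/base.tar.gz"
```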
Is there anything else we can do apart from getting a bigger disk (not
trivial at the moment)? Any best practice?
I suspect that setting up WAL archiving and/or raising the
wal_keep_segments setting might help, but as you can probably gather, I'd
like to be sure that I'm doing something sane before I dive in.
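Those two knobs are indeed the usual answers. A minimal postgresql.conf sketch (values and the archive path are hypothetical placeholders; archive_mode requires a server restart, and the archive_command shown is the stock cp example from the PostgreSQL docs, which you would adapt to your storage):

```
# postgresql.conf -- hypothetical values, tune to your WAL volume
wal_keep_segments = 4096              # pins ~64GB of WAL in pg_xlog
archive_mode = on                     # requires a server restart
archive_command = 'test ! -f /wal_archive/%f && cp %p /wal_archive/%f'
```

wal_keep_segments alone keeps segments in pg_xlog long enough for pg_basebackup to fetch them itself, but only if sized to cover the whole backup window; with archiving, the base backup can omit bundled WAL entirely and the archived segments are replayed at restore time instead.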
Happy to give more detail if required.
Antony