Re: Stuck trying to backup large database - best practice?

From: Joseph Kregloh <jkregloh(at)sproutloud(dot)com>
To: Antony Gelberg <antony(dot)gelberg(at)gmail(dot)com>
Cc: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Stuck trying to backup large database - best practice?
Date: 2015-01-12 23:01:36
Message-ID: CAAW2xfcUZBvbpRO7HDrcTYNeT8FpRzT+-+KZ08vzByNXN76Cwg@mail.gmail.com
Lists: pgsql-general

I apologize if this has already been suggested; I've already deleted the
previous emails in this chain.

Have you looked into Barman? My current database is just a tad over 1TB. I
have one master, two slaves, and another machine running Barman. The slaves
are there for redundancy: if the master fails, a slave gets promoted. The
backups are all done by Barman, which allows for PITR. I do not run any
backup software on the database server itself, only on the Barman server.
In Barman I keep a 7-day retention policy, and Bacula then backs up the
Barman server with a 1-month retention policy, so in theory I can do a PITR
up to a month in the past.
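
For the retention side, Barman's config supports recovery-window policies.
An excerpt from memory (so treat the hostnames, paths, and values as
illustrative, not exact):

    # /etc/barman.conf (excerpt, from memory -- values illustrative)
    [main]
    description = "Master database server"
    ssh_command = ssh postgres@master
    conninfo = host=master user=postgres
    retention_policy = RECOVERY WINDOW OF 7 DAYS

Bacula then just backs up Barman's backup directory like any other set of
files.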

Thanks,
-Joseph Kregloh

On Mon, Jan 12, 2015 at 5:16 PM, Antony Gelberg <antony(dot)gelberg(at)gmail(dot)com>
wrote:

> On Mon, Jan 12, 2015 at 7:08 PM, Adrian Klaver
> <adrian(dot)klaver(at)aklaver(dot)com> wrote:
> >
> > On 01/12/2015 08:40 AM, Antony Gelberg wrote:
> >>
> >> On Mon, Jan 12, 2015 at 6:23 PM, Adrian Klaver
> >> <adrian(dot)klaver(at)aklaver(dot)com> wrote:
> >>>
> >>> On 01/12/2015 08:10 AM, Antony Gelberg wrote:
> >>>>
> >>>> On Mon, Jan 12, 2015 at 5:31 PM, Adrian Klaver
> >>>> <adrian(dot)klaver(at)aklaver(dot)com> wrote:
> >>> pg_basebackup has additional features which in your case are creating
> >>> issues. pg_dump on the other hand is pretty much a straightforward data
> >>> dump, and if you use -Fc you get compression.
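> >>>
> >>> For example, something like:
> >>>
> >>>   pg_dump -Fc -f mydb.dump mydb
> >>>
> >>> gives a compressed custom-format dump that pg_restore can later restore
> >>> selectively.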
> >>
> >>
> >> So I should clarify - we want to be able to get back to the same point
> >> as we would once the WAL was applied. If we were to use pg_dump,
> >> would we lose out in any way?
> >
> >
> > pg_dump does not save WALs, so it would not work for that purpose.
> >
> >> Appreciate insight as to how pg_basebackup is scuppering things.
> >
> >
> > From the original post it is not entirely clear whether you are using
> > the -X or -x options. The command you show has neither, but you mention
> > -Xs. In any case it seems wal_keep_segments will need to be bumped up to
> > keep WAL segments around while they are being recycled during the backup
> > process. How much will depend on how fast Postgres is using/recycling
> > log segments; looking at the turnover in the pg_xlog directory would be
> > a start.
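> >
> > For example (the number here is a guess; size it to your actual
> > turnover):
> >
> >   ls -lt $PGDATA/pg_xlog | head    # how fast are segments recycled?
> >
> > and then in postgresql.conf something like:
> >
> >   wal_keep_segments = 256          # 256 x 16MB segments, ~4GB of WAL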
>
> The original script used -xs, but that didn't make sense, so we used -Xs
> in the end. We then cancelled the backup, as we assumed we wouldn't have
> enough space for it uncompressed. Did we miss something?
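>
> Would something like
>
>   pg_basebackup -D /backup/base -Ft -z -X fetch
>
> (tar format, gzipped, with the WAL fetched into the tar at the end) have
> kept it compressed, or is there a catch combining -X stream with tar
> output?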
>
> I think your suggestion of looking at pg_xlog and tweaking
> wal_keep_segments is interesting; we'll take a look, and I'll report back
> with findings.
>
> Thanks for your very detailed help.
>
> Antony
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
