From: Jerry Sievers <gsievers19(at)comcast(dot)net>
To: John Scalia <jayknowsunix(at)gmail(dot)com>
Cc: Matheus de Oliveira <matioli(dot)matheus(at)gmail(dot)com>, "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Hourly backup using pg_basebackup
Date: 2015-02-06 21:12:28
Message-ID: 86egq2hhhf.fsf@jerry.enova.com
Lists: pgsql-admin
John Scalia <jayknowsunix(at)gmail(dot)com> writes:
> On 2/6/2015 2:25 PM, Matheus de Oliveira wrote:
>
> On Fri, Feb 6, 2015 at 4:53 PM, John Scalia <jayknowsunix(at)gmail(dot)com> wrote:
>
> We have a python script called by cron on an hourly basis to back up our production database. Currently, the script invokes pg_dump and takes more than an hour to
> complete, so the script checks whether it's already running and exits if so. I want to change the script to use pg_basebackup instead, since that's so
> much faster.
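The "check if it's already running and exit" step the script performs can be done robustly with an advisory file lock rather than scanning the process table. A minimal sketch, assuming a hypothetical lock path of /tmp/pg_backup.lock:

```python
import fcntl
import sys

# Hypothetical lock file path; adjust for your environment.
LOCKFILE = "/tmp/pg_backup.lock"

def acquire_lock(path):
    """Return an open file holding an exclusive, non-blocking lock,
    or None if another process already holds it."""
    f = open(path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except BlockingIOError:
        f.close()
        return None

if __name__ == "__main__":
    lock = acquire_lock(LOCKFILE)
    if lock is None:
        sys.exit("previous backup still running; exiting")
    # ... invoke pg_basebackup (or pg_dump) here ...
```

Because flock locks are released automatically when the process exits, a crashed backup run cannot leave a stale lock behind, unlike a PID file.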
>
> Have you considered using incremental backup (continuous archiving) instead, given such a small backup window?
>
> See [1]
>
> My problem, however, is that while I'd like it to just build a tarball, maybe compressed, I can't use the "-X s" option for the WAL segments. I think I
> understand why I can't use the streaming option with "-Ft" specified. I'm just concerned about the docs saying that the backup may have problems with fetch, since
> a WAL segment may have expired. Manual testing shows that the DB needs about 11 minutes to back up with pg_basebackup, and our wal_keep_segments setting
> is 6. Given that, an hour's worth of WAL segments should be available, but the six that were there at the beginning of the backup are not the same six there at
> the end. I don't think this is really a problem, but I'd like to get it confirmed. Wouldn't the backup actually have to take more than an hour for this to be an
> issue?
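The two modes under discussion can be sketched as follows; paths and flags are illustrative, and note that at the time of this thread "-X stream" could not be combined with the tar format (later releases lifted that restriction):

```
# Tar-format, compressed backup; WAL is fetched at the end (-X fetch),
# which is what exposes the wal_keep_segments recycling concern.
pg_basebackup -D /backups/base_$(date +%Y%m%d%H) -Ft -z -X fetch -P

# Plain-format backup with WAL streamed in parallel (-X stream),
# which avoids the recycling race but forgoes the single tarball.
pg_basebackup -D /backups/base_$(date +%Y%m%d%H) -Fp -X stream -P
```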
>
> If you use archiving [1], you don't need to worry about saving the segments within the backup; just let it be done through archive_command. Isn't that an option
> for you? If not, why not?
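The archiving setup Matheus refers to is just a few postgresql.conf settings; this sketch uses the canonical example from the documentation, with /archive/wal as a placeholder directory:

```
# postgresql.conf -- continuous archiving sketch
wal_level = archive          # ('replica' on 9.6 and later)
archive_mode = on
archive_command = 'test ! -f /archive/wal/%f && cp %p /archive/wal/%f'
```

With this in place, completed WAL segments are copied out before they can be recycled, so the backup window no longer depends on wal_keep_segments.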
>
> Oh, yes, I did fail to mention that the system where I'm trying this is the primary in a streaming replication cluster with two hot standby servers. I've mentioned,
> without much traction, that we don't really even need a backup with three servers ready to do the work without delays. Like I said in the last post, it's all political.
Er, yes you do.
Suppose someone or something drops a table erroneously?
Do that and every one of your streaming standbys is trash too.
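Jerry's point is that the standbys replay the DROP almost immediately; only a base backup plus archived WAL lets you recover to a moment just before the mistake. A point-in-time recovery sketch for the PostgreSQL versions of this era (recovery.conf was replaced in version 12); the timestamp and paths are illustrative placeholders:

```
# recovery.conf, placed in a restored base backup's data directory
restore_command = 'cp /archive/wal/%f "%p"'
recovery_target_time = '2015-02-06 20:55:00'   # just before the accidental DROP
```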
>
> [1] http://www.postgresql.org/docs/current/static/continuous-archiving.html
>
> Best regards,
> --
> Matheus de Oliveira
> Database Analyst
> Dextra Sistemas - MPS.Br nível F!
> www.dextra.com.br/postgres
>
--
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres(dot)consulting(at)comcast(dot)net
p: 312.241.7800