From: Shaun Thomas <sthomas(at)optionshouse(dot)com>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Backup Question
Date: 2013-10-22 13:47:23
Message-ID: 0683F5F5A5C7FE419A752A034B4A0B979743D856@sswchi5pmbx2.peak6.net
Lists: pgsql-general
Hey everyone,
This should be pretty straightforward, but I figured I'd pass it by anyway.
I have a revised backup process that's coming out inconsistent, and I'm not entirely sure why. I call pg_start_backup(), tar.gz the contents elsewhere, then pg_stop_backup(). Nothing crazy. Upon restore, two of my tables report duplicate IDs when I run my redaction scripts. The "duplicate" records ended up having different ctids, suggesting the log replay was incomplete. However, nothing in the restore logs suggests this is the case, and either way, the database wouldn't have come up if it were. (Right?)
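For reference, the check that's tripping is essentially the following, run against the restored copy (table and column names here are placeholders, not my real schema):

import psycopg2

# Connect to the restored copy; "restored_db", "some_table", and
# "some_id" are placeholder names, not the real ones.
conn = psycopg2.connect(dbname='restored_db')
cur = conn.cursor()

# IDs that appear more than once, plus the physical row locations.
# Two different ctids for the same ID means two distinct heap tuples
# survived the restore.
cur.execute("""
    SELECT some_id, array_agg(ctid::text)
      FROM some_table
     GROUP BY some_id
    HAVING count(*) > 1
""")
for dup_id, ctids in cur.fetchall():
    print(dup_id, ctids)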
Now, the main difference is that I'm running the backup process on our streaming replication node. The backup process calls pg_start_backup() on the upstream provider, backs up the local content, then calls pg_stop_backup() on the upstream provider. It captures both the start and stop transaction log positions so it can grab all of the archived WAL files involved. I already know the start xlog position is insufficient on its own, because those transaction logs may not have replayed on the standby yet, so I also grab 3 x checkpoint_timeout worth of extra older files (from before the backup start), just in case.
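In rough Python pseudo-form, the flow is something like this (hostnames, paths, and the backup label are invented for illustration; the real script has more error handling):

import subprocess
import psycopg2

PRIMARY = 'primary.example.com'   # upstream provider (placeholder)
DATADIR = '/db/pgdata'            # local replica data dir (placeholder)

# 1. Tell the *primary* a backup is starting; it's the replica's
#    data directory that actually gets archived.
conn = psycopg2.connect(host=PRIMARY, dbname='postgres')
conn.autocommit = True
cur = conn.cursor()
cur.execute("SELECT pg_start_backup('nightly', true)")
start_pos = cur.fetchone()[0]

# 2. Archive the local (replica) data directory, compressing with pigz.
with open('/backups/base.tar.gz', 'wb') as out:
    tar = subprocess.Popen(['tar', '-cf', '-', '-C', DATADIR, '.'],
                           stdout=subprocess.PIPE)
    subprocess.check_call(['pigz'], stdin=tar.stdout, stdout=out)
    tar.wait()

# 3. End the backup on the primary and note the stop position.
cur.execute("SELECT pg_stop_backup()")
stop_pos = cur.fetchone()[0]

# 4. Collect every archived WAL file from (start_pos minus
#    3 x checkpoint_timeout worth of segments) through stop_pos
#    and store it alongside the base backup.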
So, I get no complaints of missing or damaged archive log files, yet the restore is invalid. I checked the upstream, and those duplicate rows are not present; it's clearly the backup that's at fault. I remember having this problem a couple of years ago, but I "fixed" it by working filesystem snapshots into the backup script. I can do that again, but it seems like overkill, honestly.
Why am I using my own backup system instead of pg_basebackup, or Barman or something? Because I use pigz for parallel compression and hard links to save space. I can back up an 800GB database in less than 20 minutes a night, or 45 minutes for a non-incremental backup, all without disturbing the primary node. Like I said, I can enable filesystem snapshots to fix this, but it feels like something more obvious is going on.
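(The pigz side is just tar piped through pigz, as in the sketch above. The hard-link half is the usual rsync --link-dest trick; dates and paths below are invented:)

import subprocess

# Seed tonight's backup from last night's: unchanged files get
# hard-linked to the previous copy, so they cost no extra space.
subprocess.check_call([
    'rsync', '-a',
    '--link-dest=/backups/2013-10-21',  # last night's backup
    '/db/pgdata/',                      # source (replica data dir)
    '/backups/2013-10-22/',             # tonight's incremental
])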
Any ideas?
--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd | Suite 500 | Chicago IL, 60604
312-676-8870
sthomas(at)optionshouse(dot)com
______________________________________________
See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email