From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: John Sherwood <john(dot)sherwood(dot)was(dot)taken(at)gmail(dot)com>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: Backups failing despite many settings that should prevent it
Date: 2015-07-07 14:28:30
Message-ID: 24872.1436279310@sss.pgh.pa.us
Lists: pgsql-sql
John Sherwood <john(dot)sherwood(dot)was(dot)taken(at)gmail(dot)com> writes:
> I've been getting some backup failures when trying to back up a fairly
> large database. I've tried setting:
> * vacuum_defer_cleanup_age very high on the master
> * hot_standby_feedback = on on the slave
> * max_standby_streaming_delay = 300s and max_standby_archive_delay = 300s
> on the slave
> Nothing seems to stop it failing intermittently. The error I'm getting is
> below:
> pg_dump: Dumping the contents of table "impressions" failed:
> PQgetResult() failed.
> pg_dump: Error message from server: ERROR: canceling statement due to
> conflict with recovery
> DETAIL: User was holding a relation lock for too long.
> pg_dump: The command was: COPY public.impressions (id, ... ) TO stdout;
> The following system errors were returned:
> Errno::EPERM: Operation not permitted - 'pg_dump' returned exit code: 1
> Any ideas what setting I'm missing?
Sure looks like a max_standby_streaming_delay violation from here. Are you
sure 300s is enough time to back up your "fairly large" database? How long
does the pg_dump run before failing, anyway?
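If the dump simply needs more time than the delay window allows, one option
is to stop the standby from cancelling long-running queries at all. A sketch
of the relevant standby postgresql.conf settings (values are illustrative,
not a blanket recommendation; -1 means "wait forever", at the cost of
letting WAL replay fall behind while the dump runs):

```
# Standby postgresql.conf -- illustrative values, not a recommendation
hot_standby_feedback = on            # master defers cleanup of rows the standby still needs
max_standby_streaming_delay = -1     # never cancel standby queries for streamed WAL
max_standby_archive_delay = -1       # likewise for WAL restored from archive
```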
regards, tom lane
 | From | Date | Subject
---|---|---|---
Next Message | Tim Clotworthy | 2015-07-08 16:30:29 | Oracle to PostgreSQL Migration - Need Information
Previous Message | John Sherwood | 2015-07-07 13:35:23 | Backups failing despite many settings that should prevent it