| From: | Daniel Farina <daniel(at)heroku(dot)com> |
|---|---|
| To: | Peter Geoghegan <pg(at)heroku(dot)com> |
| Cc: | Pg Hackers <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: Better handling of archive_command problems |
| Date: | 2013-05-14 04:23:05 |
| Message-ID: | CAAZKuFYbMogz0k-hMrzKPDQKhAai20eL4aVa-Z8=QyfRkYwTqg@mail.gmail.com |
| Lists: | pgsql-hackers |
On Mon, May 13, 2013 at 3:02 PM, Peter Geoghegan <pg(at)heroku(dot)com> wrote:
> Has anyone else thought about approaches to mitigating the problems
> that arise when an archive_command continually fails, and the DBA must
> manually clean up the mess?
Notably, the most common problem in this vein suffered at Heroku has
nothing to do with archive_command failing, and everything to do with
the ratio of block-device write performance (and hence backlog
accumulation) to archiving performance. When the CPU is uncontended
the deficit is not huge, but it is there, and it causes quite a bit of
stress.
A failing archive_command is definitely a special case of that, where
it might be nice to bring write traffic to exactly zero for a time.
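For reference, the backlog described above can be observed directly: the
archiver leaves a `.ready` marker file in the WAL directory's
`archive_status` subdirectory for each segment that still awaits
archiving. A minimal monitoring sketch (not part of any proposal in this
thread; the `PGDATA` path and the threshold of 100 are illustrative
assumptions, and the `pg_xlog` layout matches pre-10 releases current at
the time of this email):

```shell
#!/bin/sh
# Count WAL segments awaiting archiving by listing .ready marker files.
# PGDATA and the alert threshold below are assumptions for illustration.
PGDATA=${PGDATA:-/var/lib/postgresql/data}

# Each *.ready file corresponds to one WAL segment the archiver has not
# yet handed off successfully via archive_command.
backlog=$(ls "$PGDATA/pg_xlog/archive_status"/*.ready 2>/dev/null | wc -l)
echo "WAL segments awaiting archive: $backlog"

# A growing backlog means write throughput is outpacing archiving.
if [ "$backlog" -gt 100 ]; then
    echo "warning: archiver is falling behind" >&2
fi
```

A persistent nonzero count here is the symptom: writes are being
generated faster than archive_command can ship them.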
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Michael Paquier | 2013-05-14 04:51:42 | Re: Parallel Sort |
| Previous Message | Peter Eisentraut | 2013-05-14 03:13:57 | Re: commit fest schedule for 9.4 |