From: Peter Geoghegan <pg(at)heroku(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Daniel Farina <daniel(at)heroku(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Better handling of archive_command problems
Date: 2013-05-16 18:42:41
Message-ID: CAM3SWZQ1HwokhdGym2_5nfZq6fefPZJVwnA4ti+ESmMZmhxg6Q@mail.gmail.com
Lists: pgsql-hackers
On Thu, May 16, 2013 at 11:16 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> Well, I think it IS a Postgres precept that interrupts should get a
> timely response. You don't have to agree, but I think that's
> important.
Well, yes, but the fact of the matter is that it is already taking high
single-digit numbers of seconds to get a response at times, so I don't
think there is any reasonable expectation that the response be almost
instantaneous. I don't want to make that worse, but it might be worth
it in order to ameliorate a particular pain point for users.
>> There is a setting called zero_damaged_pages, and enabling it causes
>> data loss. I've seen cases where it was enabled within postgresql.conf
>> for years.
>
> That is both true and bad, but it is not a reason to do more bad things.
I don't think it's bad. I think that we shouldn't be paternalistic
towards our users. If anyone enables a setting like zero_damaged_pages
(or, say, wal_write_throttle) within their postgresql.conf
indefinitely for no good reason, then they're incompetent. End of
story.
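For context, the footgun in question is nothing more than a one-line
postgresql.conf entry (zero_damaged_pages is a real developer-only GUC;
leaving it on silently zeroes pages that fail checksum/header validation):

```
# DANGEROUS: intended only for one-off recovery sessions, never
# to be left enabled indefinitely in postgresql.conf.
zero_damaged_pages = on
```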
Would you feel better about it if the setting had a time-out? Say, the
user had to explicitly re-enable it after one hour at the most?
--
Peter Geoghegan